Installing systemd inside an Ubuntu 14.04 Docker container - Is it possible? - docker

I am trying to install and configure OpenStack (DevStack) inside a Docker container. While installing, I get the following error:
"Failed to get D-Bus connection: No connection to service manager."
Later I checked and found that it is caused by a systemd problem. When I tried executing the command systemd:
$> systemd
I get the following output:
Trying to run as user instance, but the system has not been booted with systemd.
Here is my setup:
Host machine OS: Ubuntu 14.04,
Docker version: Docker version 1.12.4, build 1564f02,
Docker container OS: Ubuntu 14.04
Can anyone help with this? Thanks in advance.

First of all, systemd expects /sys/fs/cgroup to be mounted. Additionally, you must make the container privileged, or else systemd will not come up. Start the container like this:
docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged -it --rm ubuntu
Then you can go ahead and run /bin/systemd --system --unit=basic.target from bash, and it should run normally (with some errors of course, because Docker does not virtualize an entire system, nor is the library/ubuntu image anything more than the minimum needed to run properly).
After you have systemd running (semi-)properly, you can simply use a docker stop to stop the container.
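Putting those pieces together, a minimal sketch of the whole sequence could look like this (the container name systemd-test is just an example):
docker run -d --privileged --name systemd-test \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
ubuntu /bin/systemd --system --unit=basic.target
docker exec -it systemd-test bash   # poke around while systemd is running
docker stop systemd-test            # stop the container when you are done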
This post is based on my own research, a few weeks of it too, for a project I like to call initbuntu (originally I tried to get init running, but running systemd directly was my only solution after all my failed tries). The container will be available on Docker Hub as logandark/initbuntu, Soon™. For now, a broken copy (or not broken, I dunno) is available there at the time of posting.
Sources (kinda):
/sys/fs/cgroup: Here
systemd --system: A StackOverflow post I lost the link to.

Existing DevStack on Docker Project
First of all, you can get a preconfigured Dockerfile with DevStack Ocata/Pike on Docker here. The repository also contains further information on DevStack and containers.
Build Your Own Image
Running systemd in Docker is certainly possible and has been done before. I found Ubuntu 16.04 LTS is a good foundation for the Docker host as well as the base image.
Your systemd/DevStack Dockerfile needs this configuration, which also cleans up services you probably don't want inside a Docker container:
FROM ubuntu:16.04
#####################################################################
# Systemd workaround from solita/ubuntu-systemd and moby/moby#28614 #
#####################################################################
ENV container docker
# No need for graphical.target
RUN systemctl set-default multi-user.target
# Gracefully stop systemd
STOPSIGNAL SIGRTMIN+3
# Cleanup unneeded services
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
# Workaround for console output error moby/moby#27202, based on moby/moby#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
If you intend to run OpenStack/DevStack inside said container, it might save you lots of trouble to start it privileged instead of defining separate security capabilities and volumes:
docker run \
--name devstack \
--privileged \
--detach \
image
To get a bash inside your new systemd container try this:
docker exec \
--tty \
--interactive \
devstack \
bash
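If you would rather not use --privileged, the separate capabilities and volumes mentioned above look roughly like this; this is only a sketch and may still not cover everything DevStack needs:
docker run \
--name devstack \
--cap-add SYS_ADMIN \
--tmpfs /run \
--tmpfs /run/lock \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
--detach \
image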

Systemd should work inside a properly configured container. You can run the container in privileged mode to run systemd.
"Systemd cannot run without SYS_ADMIN, less privileges than that won't work (see #2296 (comment)). Yes it's possible to make it "easier" (a tool that automatically sets these), but it'll still need certain privileges"
See this GitHub issue.
After all, Docker is an application container: it runs the process you specify at run time and exits once that process completes. Maybe you need an OS container or a virtual machine for your use case. See OS container vs Application Container here.

In most cases the error message comes up because an installer program has tried to run "systemctl start <service>". Unlike init scripts, the systemctl command does not try to execute the start script directly - instead it tries to contact the systemd daemon to execute the start sequence of the service. So all services have a common parent in the systemd daemon.
It can be overkill to run a systemd daemon inside a Docker container just to start a service. You could use the systemctl-docker-replacement, overwriting /usr/bin/systemctl, in which case the target service is started without the help of a systemd daemon. It runs the ExecStart from the *.service file directly.
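As a rough illustration of what such a replacement boils down to, you can pull the ExecStart line out of a unit file and run it yourself (myservice.service is a hypothetical unit name; real units often need their environment and dependencies set up as well):
grep '^ExecStart=' /lib/systemd/system/myservice.service | cut -d= -f2- | sh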

Related

Docker System has not been booted with systemd as init system

I have an Ubuntu 18.04 image running in my Docker container. I logged into it and installed OpenResty; I also installed systemd. When I use the command systemctl I get this error:
System has not been booted with systemd as init system (PID 1). Can't operate.
How can I fix it?
If I understand the OP, he is trying to run systemctl within the container. This does not work because systemd is not running within the container to begin with, and it cannot be run in unprivileged containers. There is another question here on SO about why you should not run systemd within a container.
I quickly googled and found this 2014 page about using systemd within a container in docker, where there is a short explanation. The fix is to use a privileged container (running docker run --privileged ...), which is arguably a bad idea but might suit the OP. There is a 2019 update of that last article, and the bottomline is they developed their own container engine (so no docker).
The obvious solution would be to have a single service, so no need for systemd, although that might not be possible in the OP's case.
In summary, possible solutions:
not to use systemd
use a privileged container
not to use docker
In your terminal, you can type:
$ sudo dockerd
and the magic happens.
Then open another terminal and try:
$ docker ps -a
If you still have a problem with permissions, run:
$ sudo usermod -aG docker your-user
Did you try using sudo /etc/init.d/docker start instead of systemd?
I had a similar problem and this solved it.
You need to start your container with this command to enable systemd:
docker run -itd --privileged ubuntu:18.04 /usr/sbin/init
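Once the container is up with /usr/sbin/init as PID 1, systemctl inside it should be able to talk to systemd, for example:
docker exec -it <container-id> systemctl status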
After toying with systemd myself and bumping into this, I found a good way to work around it in Docker.
You can set up a cron job to run on container reboot.
Dockerfile.yml:
COPY startup.sh /home/$USERNAME
WORKDIR /home/$USERNAME
RUN chmod +x startup.sh
RUN runuser -u $USERNAME -- echo "@reboot /home/$USERNAME/startup.sh" >> cronjobs
RUN runuser -u $USERNAME -- crontab cronjobs
RUN runuser -u $USERNAME -- rm cronjobs
https://askubuntu.com/questions/814/how-to-run-scripts-on-start-up#816
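Note that the @reboot entry only fires if cron itself is started when the container starts, so the container's main command has to launch the cron daemon; a minimal sketch, assuming the cron package is installed in the image:
CMD ["cron", "-f"]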
To complement #javier-gonzalez's answer, if you are following running systemd within a container AND getting the error bash: /usr/sbin/init: No such file or directory when trying to run the container, you can use /lib/systemd/systemd as the ENTRYPOINT in your Dockerfile instead of /usr/sbin/init, since the latter is just a symlink to the same thing.
FROM ubuntu:<anyversion>
ENTRYPOINT ["/lib/systemd/systemd"]
You may have forgotten to start docker before using it
sudo service docker start

Running docker command from docker container

I need to write a Dockerfile that installs Docker in container-a, because container-a needs to execute a docker command against container-b, which runs alongside container-a.
My understanding is you're not supposed to use "sudo" when writing the Dockerfile.
But I'm getting stuck -- what user do I assign to the docker group? When you run docker exec -it, you are automatically root.
sudo usermod -a -G docker whatuser?
Also (and I'm trying this out manually inside container-a to see if it even works) you have to do a newgrp docker to activate the changes to groups. Anytime I do that, I end up sudo'ing when I haven't sudo'ed. Does that make sense? The symptom is -- I go to exit the container, and I have to exit twice (as if I changed users).
What am I doing wrong?
If you are trying to run the containers alongside one another (not container inside container), you should mount the docker socket from the host system and execute commands to other containers that way:
docker run --name containera \
-v /var/run/docker.sock:/var/run/docker.sock \
yourimage
With the docker socket mounted you can control Docker on the host system.
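For example, assuming the docker CLI is installed in yourimage, you can act on the sibling container from inside containera (container-b is the name from the question):
docker exec -it containera sh
# inside containera, the mounted socket lets you reach the host's daemon:
docker ps                    # lists all containers running on the host
docker restart container-b   # acts on the sibling container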

Puppet container wont start automatically

So I have created a puppet container for a certificate authority. It works, but does not start correctly. Here is my Dockerfile:
FROM centos:6
RUN yum update -y
RUN rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
RUN yum install -y ruby puppet-server
ADD puppet.conf /etc/puppet/puppet.conf
VOLUME ["/var/lib/puppet/ssl/"]
EXPOSE 9140
#NOTHING BELOW THIS COMMENT RUNS
RUN puppet master --verbose
CMD service puppetmaster start
CMD chkconfig puppetmaster on
CMD []
I can then start the container with the following run command (note that I named the image ca-puppet):
docker run -d -p 9140:9140 -it --name ca-puppet \
-v /puppet-host/ssl:/var/lib/puppet/ssl \
ca-puppet bash
The issue is that I need to docker exec into the container and run the following commands to get it started and create the ca certificates in its ssl directory:
puppet master --verbose
service puppetmaster start
chkconfig puppetmaster on
I have a feeling I should be using some other Docker file commands to run the last 3 commands. What should that be?
There can only be one CMD instruction in a Dockerfile. If you list
more than one CMD then only the last CMD will take effect.
also
If the user specifies arguments to docker run then they will override
the default specified in CMD.
However, using the default process manager (e.g., SysV, systemd, etc.) in Docker for most mainstream distros can cause problems (without making a lot of modifications). You generally don't need it, particularly if you're only running one application (as is often considered best practice). In a Docker container, you generally want your primary application to be the first process (PID 1).
You can do this by not daemonizing puppet and start it as the default container command via something like:
CMD puppet master --verbose --no-daemonize
And use the Docker host to manage it (via restart policy, etc.).
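Based on the run command from the question, that could look something like this (a sketch; the trailing bash is dropped so the CMD above is used, and a restart policy is added):
docker run -d -p 9140:9140 --name ca-puppet \
--restart=always \
-v /puppet-host/ssl:/var/lib/puppet/ssl \
ca-puppet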

Start and attach a docker container with X11 forwarding

There are various articles like this, this and this and many more, that explains how to use X11 forwarding to run GUI apps on Docker. I am using a Centos Docker container.
However, all of these approaches use
docker run
with all appropriate options in order to visualize the result. Any use of docker run creates a new container and performs the operation on top of that.
A way to work in the same container is to use docker start followed by docker attach and then executing the commands at the container's prompt. Additionally, the script (let's say xyz.sh) that I intend to run in the Docker container resides inside a folder MyFiles in the root directory of the container and accepts a parameter as well.
So is there a way to run the script using docker start and/or docker attach while also X11-forwarding it?
This is what I have tried, although I would like to avoid docker run and instead use docker start and docker attach:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
centos \
cd MyFiles \
./xyz.sh param1
export containerId=$(docker ps -l -q)
This in turn throws up an error as below -
/usr/bin/cd: line 2: cd: MyFiles/: No such file or directory
How can I run the script xyz.sh under MyFiles on the Docker container using docker start and docker attach?
Also since the location and the name of the script may vary, I would like to know if it is mandatory to include each of these path in the system path variable on the Docker container or can it be done at runtime also?
It looks to me your problem is not with X11 forwarding but with general Docker syntax.
You could do it rather simply:
sudo docker run -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
--rm \
centos \
bash -c "./xyz.sh param1"
I added:
--rm to avoid stacking old dead containers.
-w to set the working directory (the absolute path /MyFiles, since the folder sits in the container's root directory).
bash -c to make sure your script is interpreted by bash.
How to do it without docker run:
run is actually like create followed by start. You can split it into two steps if you prefer.
If you want to attach to a container, it must be running first. And for it to be running, there must be a process currently running inside.
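A rough equivalent of the docker run above, split into separate steps, might look like this (assuming xyz.sh lives in /MyFiles as described in the question):
containerId=$(sudo docker create -it \
--env="DISPLAY" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
-w /MyFiles \
centos bash)
sudo docker start "$containerId"
sudo docker exec -it "$containerId" /MyFiles/xyz.sh param1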

I lose my data when the container exits

Despite Docker's interactive tutorial and FAQ, I lose my data when the container exits.
I have installed Docker as described here: http://docs.docker.io/en/latest/installation/ubuntulinux
without any problem on Ubuntu 13.04.
But it loses all data when it exits.
iman#test:~$ sudo docker version
Client version: 0.6.4
Go version (client): go1.1.2
Git commit (client): 2f74b1c
Server version: 0.6.4
Git commit (server): 2f74b1c
Go version (server): go1.1.2
Last stable version: 0.6.4
iman#test:~$ sudo docker run ubuntu ping
2013/10/25 08:05:47 Unable to locate ping
iman#test:~$ sudo docker run ubuntu apt-get install ping
Reading package lists...
Building dependency tree...
The following NEW packages will be installed:
iputils-ping
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.1 kB of archives.
After this operation, 143 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main iputils-ping amd64 3:20101006-1ubuntu1 [56.1 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 56.1 kB in 0s (195 kB/s)
Selecting previously unselected package iputils-ping.
(Reading database ... 7545 files and directories currently installed.)
Unpacking iputils-ping (from .../iputils-ping_3%3a20101006-1ubuntu1_amd64.deb) ...
Setting up iputils-ping (3:20101006-1ubuntu1) ...
iman#test:~$ sudo docker run ubuntu ping
2013/10/25 08:06:11 Unable to locate ping
iman#test:~$ sudo docker run ubuntu touch /home/test
iman#test:~$ sudo docker run ubuntu ls /home/test
ls: cannot access /home/test: No such file or directory
I also tested it with interactive sessions with the same result. Did I forget something?
EDIT: IMPORTANT FOR NEW DOCKER USERS
As #mohammed-noureldin and others said, this is actually NOT a container exiting and losing data; every docker run simply creates a new container.
You need to commit the changes you make to the container and then run the committed image. Try this:
sudo docker pull ubuntu
sudo docker run ubuntu apt-get install -y ping
Then get the container id using this command:
sudo docker ps -l
Commit changes to the container:
sudo docker commit <container_id> iman/ping
Then run the container:
sudo docker run iman/ping ping www.google.com
This should work.
When you use docker run to start a container, it actually creates a new container based on the image you have specified.
Besides the other useful answers here, note that you can restart an existing container after it has exited and your changes will still be there.
docker start f357e2faab77 # restart it in the background
docker attach f357e2faab77 # reattach the terminal & stdin
There are the following ways to persist container data:
Docker volumes
Docker commit
a) create container from ubuntu image and run a bash terminal.
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal install curl
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take a note of your container id by executing following command :
$ docker ps -a
e) save container as new image
$ docker commit <container_id> new_image_name:tag_name(optional)
f) verify that you can see your new image with curl installed.
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
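The commit approach is shown above; the volume approach looks roughly like this (the volume name mydata and the mount path /data are just examples):
$ docker volume create mydata
$ docker run -it -v mydata:/data ubuntu:14.04 bash
Anything written under /data inside the container survives the container being removed.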
In addition to Unferth's answer, it is recommended to create a Dockerfile.
In an empty directory, create a file called "Dockerfile" with the following contents.
FROM ubuntu
RUN apt-get install -y ping
ENTRYPOINT ["ping"]
Create an image using the Dockerfile. Let's use a tag so we don't need to remember the hexadecimal image number.
$ docker build -t iman/ping .
And then run the image in a container.
$ docker run iman/ping stackoverflow.com
There are really great answers above to the asked question. There might be no need for another answer, but I still want to give my personal opinion on the topic in the simplest words possible.
Here are some points about containers and images that will help us reach a conclusion:
A docker image can be:
created-from-a-given-container
deleted
used-to-create-any-number-of-containers
A docker container can be:
created-from-an-image
started
stopped
restarted
deleted
used-to-create-any-number-of-images
A docker run command does this:
Downloads an image or uses a cached image
Creates a new container out of it
Starts the container
When a Dockerfile is used to create an image:
It is already well known that the image will eventually be used to run a docker container.
After issuing docker build command, docker behind-the-scenes creates a running container with a base-file-system and follows steps inside the Dockerfile to configure that container as per the developers need.
After the container is configured with specs of the Dockerfile, it will be committed as an image.
The image gets ready to rock & roll!
Conclusion:
As we can see, a docker container is independent of a docker image.
A container can be restarted provided the unique ID of that container [use docker ps --all to get the id].
Any operation like making a new directory, creating files, installing tools, etc. can be done inside the container while it is running. Once the container is stopped, it keeps all those changes. Stopping and restarting a container is like rebooting a computer system.
An already created container is always available for a restart, but when we issue the docker run command, a brand-new container is created out of an image, and hence it is like a new computer system. The changes made inside the old container - as we can understand now - are not available in this new container.
A final note:
I guess it is now obvious why the data seems to be lost, yet it is always there... in a different [old] container. So take good note of the difference between the docker start and docker run commands, and never confuse them.
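A quick way to see the difference for yourself (the container name demo is arbitrary):
docker run -it --name demo ubuntu bash    # inside: touch /root/marker, then exit
docker start -ai demo                     # same container: /root/marker is still there
docker run -it ubuntu bash                # brand-new container: /root/marker is gone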
I have a much simpler answer to your question: run the following two commands.
sudo docker run -t -d --name mycontainername ubuntu /bin/bash
sudo docker ps -a
The above ps -a command returns a list of all containers. Take the name of the container that references the image name 'ubuntu'. Docker auto-generates names for containers, for example 'lightlyxuyzx', if you don't use the --name option.
The -t and -d options are important: the created container is detached and can be reattached as shown below.
With the --name option, you can name your container; in my case, 'mycontainername'.
sudo docker exec -ti mycontainername bash
and this command above lets you log in to the container with a bash shell. From this point on, any changes you make in the container are automatically saved by Docker.
For example - apt-get install curl inside the container.
You can exit the container without any issues; Docker auto-saves the changes.
On the next usage, all you have to do is run these two commands every time you want to work with this container.
This command below will start the stopped container:
sudo docker start mycontainername
sudo docker exec -ti mycontainername bash
Another example, with ports and a shared space, is given below:
docker run -t -d --name mycontainername -p 5000:5000 -v ~/PROJECTS/SPACE:/PROJECTSPACE 7efe2989e877 /bin/bash
In my case, 7efe2989e877 is the image id of a previously running container, which I obtained using
docker ps -a
You might want to look at Docker volumes if you want to persist the data in your container. Visit https://docs.docker.com/engine/tutorials/dockervolumes/. The Docker documentation is a very good place to start.
My suggestion is to manage Docker with Docker Compose. It is an easy way to manage all of your project's containers: you can map the versions and link different containers to work together.
The docs are very simple to understand, better than Docker's own docs.
Docker-Compose Docs
Best
A similar problem (and no way a Dockerfile alone could fix it) brought me to this page.
stage 0:
For everyone hoping a Dockerfile could fix it: until --dns and --dns-search appear in Dockerfile support, there is no way to integrate intranet-based resources at build time.
stage 1:
After building the image using a Dockerfile (by the way, it is a serious annoyance that the Dockerfile must be in the current folder), deploy the intranet-based content by running a docker run script. Example:
docker run -d \
--dns=${DNSLOCAL} \
--dns=${DNSGLOBAL} \
--dns-search=intranet \
--name packbsp-cont \
-t pack/bsp \
bash -c " \
wget -r --no-parent http://intranet/intranet-content.tar.gz && \
tar -xvf intranet-content.tar.gz && \
sudo -u ${USERNAME} bash --norc"
stage 2:
Apply the docker run script in daemon mode, providing local DNS records, so the container can download and deploy the intranet-based content.
Important point: the run script should end with something like /usr/bin/sudo -u ${USERNAME} bash --norc to keep the container running even after the installation script finishes.
No, it is not possible to run the container in interactive mode for full automation, as it would sit at the internal shell command prompt until CTRL-p CTRL-q is pressed.
No, if an interactive bash is not executed at the end of the installation script, the container will terminate immediately after the script finishes, losing all installation results.
stage 3:
The container is still running in the background, but it is unclear whether it has finished the installation procedure yet. Use the following block to detect when the installation has finished:
while ! docker container top ${CONTNAME} | grep "00[[:space:]]\{12\}bash \--norc" -
do
echo "."
sleep 5
done
The script will proceed further only after the installation has completed. This is the right moment to call commit, providing the current container id as well as the destination image name (it may be the same as in the build/run procedure, but appended with a tag for the local installation purposes). Example: docker commit containerID pack/bsp:toolchained.
See this link on how to get the proper containerID.
stage 4:
The container has now been updated with the local installs and has been committed into the newly assigned image (the one with the purposes tag added). It is safe to stop the running container. Example: docker stop packbsp-cont
stage 5:
Any time the container with the local installs needs to run, start it from the previously saved image.
Example: docker run -d -t pack/bsp:toolchained
A brilliant answer here: How to continue a docker container which has exited, from user kgs:
docker start $(docker ps -a -q --filter "status=exited")
(or in this case just docker start $(docker ps -ql) 'cos you don't want to start all of them)
docker exec -it <container-id> /bin/bash
That second command is crucial: exec is used in place of run, and not on an image but on a container id. And you do it after the container has been started.
None of the answers addresses the point of this design choice. I think Docker works this way to prevent two kinds of errors:
Repeated restart
Partial error
