Docker System has not been booted with systemd as init system - docker

I have an Ubuntu 18.04 image running in a Docker container. I logged into it and installed OpenResty, and also installed systemd. When I use the systemctl command I get this error:
System has not been booted with systemd as init system (PID 1). Can't operate.
How can I fix it?

If I understand the OP, he is trying to run systemctl within the container. This does not work because systemd is not running within the container to begin with, and it cannot be done in unprivileged containers. There is another question here on SO about why he should not run systemd within a container.
I quickly googled and found this 2014 page about using systemd within a Docker container, where there is a short explanation. The fix is to use a privileged container (running docker run --privileged ...), which is arguably a bad idea but might suit the OP. There is a 2019 update of that article, and the bottom line is they developed their own container engine (so no Docker).
The obvious solution would be to have a single service, so no need for systemd, although that might not be possible in the OP's case.
In summary, possible solutions:
not to use systemd
use a privileged container
not to use docker
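If the privileged-container option above is acceptable, a minimal sketch looks like the following (the image name is a placeholder; this assumes systemd is already installed in the image, as the OP did, since the stock ubuntu:18.04 image does not ship it):
$ docker run -d --name systemd-test --privileged my-ubuntu-systemd /sbin/init
$ docker exec -it systemd-test systemctl status
The first command starts the container with systemd as PID 1 (via the /sbin/init symlink); the second then runs systemctl against that instance instead of failing with the "not been booted with systemd" error.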

In your terminal, you can type:
$ sudo dockerd
and the magic happens.
Then open another terminal and try:
$ docker ps -a
If you still have a problem with permission, run:
$ sudo usermod -aG docker your-user

Did you try using sudo /etc/init.d/docker start instead of systemd?
I had a similar problem and this solved it.

You need to start your container with this command to enable systemd:
docker run -itd --privileged ubuntu:18.04 /usr/sbin/init

After toying with systemd myself and bumping into this, I found a good way to work around it in Docker.
You can set up a cron job to run on container reboot.
Dockerfile:
COPY startup.sh /home/$USERNAME
WORKDIR /home/$USERNAME
RUN chmod +x startup.sh
RUN runuser -u $USERNAME -- echo "@reboot /home/$USERNAME/startup.sh" >> cronjobs
RUN runuser -u $USERNAME -- crontab cronjobs
RUN runuser -u $USERNAME -- rm cronjobs
https://askubuntu.com/questions/814/how-to-run-scripts-on-start-up#816
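The contents of startup.sh depend on what you need at container start; a hypothetical minimal example (the OpenResty path is an assumption based on a default install, not taken from the question) could be:
#!/bin/bash
# hypothetical startup.sh: start the services you need when the container (re)starts
/usr/local/openresty/bin/openresty
Note that the @reboot entry only fires if the cron daemon itself is running in the container, so cron (or an entrypoint that starts it) still needs to be part of the image.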

To complement @javier-gonzalez's answer: if you're following the approach of running systemd within the container AND getting the error bash: /usr/sbin/init: No such file or directory when trying to run the container, you can use /lib/systemd/systemd as the ENTRYPOINT in your Dockerfile instead, since /usr/sbin/init is just a symlink to the same thing.
FROM ubuntu:<anyversion>
ENTRYPOINT ["/lib/systemd/systemd"]
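To use such an image, a rough sketch of the build-and-run steps (image and container names are placeholders; the cgroup mount mirrors what the systemd-in-Docker answers further down recommend):
$ docker build -t ubuntu-systemd .
$ docker run -d --name sysd --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro ubuntu-systemd
$ docker exec -it sysd systemctl list-units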

You may have forgotten to start Docker before using it:
sudo service docker start

Related

Dockerfile fails on build - cannot connect to docker daemon [duplicate]

I installed Docker in my machine where I have Ubuntu OS.
When I run:
sudo docker run hello-world
All is ok, but I want to hide the sudo command to make the command shorter.
If I write the command without sudo
docker run hello-world
That displays the following:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied. See 'docker run --help'.
The same happens when I try to run:
docker-compose up
How can I resolve this?
If you want to run docker as non-root user then you need to add it to the docker group.
Create the docker group if it does not exist
$ sudo groupadd docker
Add your user to the docker group.
$ sudo usermod -aG docker $USER
Log in to the new docker group (to avoid having to log out / log in again; but if not enough, try to reboot):
$ newgrp docker
Check if docker can be run without root
$ docker run hello-world
Reboot if still got error
$ reboot
Warning
The docker group grants privileges equivalent to the root user. For details on how this impacts security in your system, see Docker Daemon Attack Surface.
Taken from the official Docker documentation:
manage-docker-as-a-non-root-user
After an upgrade I got the permission denied error.
Following the post-install steps from 'mkb's answer didn't change anything, because my user was already in the 'docker' group; I retried twice anyway, without success.
After an hour of searching, the following solution finally worked:
sudo chmod 666 /var/run/docker.sock
The solution came from Olshansk.
It looks like the upgrade recreated the socket without enough permissions for the 'docker' group.
Problems
This hard chmod opens a security hole, and after each reboot the error comes back and you have to re-execute the command above. I wanted a solution once and for all. For that there are two problems:
1) Problem with systemd: the socket is created with owner 'root' and group 'root' only.
You can check this first problem with this command:
ls -l /lib/systemd/system/docker.socket
If everything is good, you should see 'root/docker', not 'root/root'.
2) Problem with graphical login: https://superuser.com/questions/1348196/why-my-linux-account-only-belongs-to-one-group
You can check this second problem with this command:
groups
If everything is correct you should see the docker group in the list.
If not, try the command
sudo su $USER -c groups
If you then see the docker group, it is because of the bug.
Solutions
If you manage to get a workaround for the graphical login, this should do the job:
sudo chgrp docker /lib/systemd/system/docker.socket
sudo chmod g+w /lib/systemd/system/docker.socket
But if you can't work around this bug, a not-so-bad solution could be this:
sudo chgrp $USER /lib/systemd/system/docker.socket
sudo chmod g+w /lib/systemd/system/docker.socket
This works because you are in a graphical environment and probably the only user on your computer.
In both cases you need a reboot (or a sudo chmod 666 /var/run/docker.sock).
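A possibly cleaner variant of the same idea (a suggestion, not part of the answer above) is to set the group on the socket unit itself via a drop-in override, so the change survives package upgrades; roughly:
$ sudo systemctl edit docker.socket
# add in the editor that opens:
[Socket]
SocketGroup=docker
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker.socket docker.service
This leaves /lib/systemd/system/docker.socket untouched and puts the override under /etc/systemd/system/docker.socket.d/.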
Add docker group
$ sudo groupadd docker
Add your current user to docker group
$ sudo usermod -aG docker $USER
Switch session to docker group
$ newgrp - docker
Run an example to test
$ docker run hello-world
Add current user to docker group
sudo usermod -aG docker $USER
Change the permissions of the Docker socket /var/run/docker.sock to be able to connect to the Docker daemon:
sudo chmod 666 /var/run/docker.sock
I solved this error with the command:
$ sudo chmod 666 /var/run/docker.sock
It only requires changing the permissions of the sock file.
sudo chmod 666 /var/run/docker.sock
This will definitely work.
If creating a docker group and adding your user to it doesn't work (the best solution, described in the previous answers), then this one is the second best alternative:
sudo chown $USER /var/run/docker.sock
What it does is changing the ownership of the docker.sock file to your user.
Note: It's a really bad practice to use chmod 666, because it gives permissions to practically everyone to access and modify the docker.sock file.
Fix Docker Issue: (Permission denied)
Create the docker group if it does not exist: sudo groupadd docker
List the members of the sudo group on the system: grep -Po '^sudo.+:\K.*$' /etc/group
Set the user in the shell if needed: export USER=demoUser
Add the user to the docker group: sudo usermod -aG docker $USER
Run the following command (or log out and log back in): newgrp docker
Check if docker runs ok or not: docker run hello-world
Reboot if you still get an error: reboot
If it does not work, run this command:
sudo chmod 660 /var/run/docker.sock
You can always try the Manage Docker as a non-root user section of the https://docs.docker.com/install/linux/linux-postinstall/ docs.
If the problem persists after doing this, you can run the following command to solve it:
sudo chmod 666 /var/run/docker.sock
We always forget about ACLs. See setfacl.
sudo setfacl -m user:$USER:rw /var/run/docker.sock
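To confirm the ACL entry was applied (just a verification step, not from the original answer):
$ getfacl /var/run/docker.sock
The output should include a user:<your-user>:rw- line. Keep in mind the ACL lives on the socket file itself, so it has to be reapplied if the socket is recreated (for example after the Docker service restarts).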
To fix that issue, I looked up where my docker and docker-compose were installed. In my case, docker was installed at /usr/bin/docker and docker-compose at /usr/local/bin/docker-compose. Then I typed this in my terminal:
For docker:
sudo chmod +x /usr/bin/docker
For docker-compose:
sudo chmod +x /usr/local/bin/docker-compose
Now I don't need to prefix my docker commands with sudo.
/***********************************************************************/
ERRATA:
The best solution to this issue was given in a comment by @mkasberg. I quote the comment:
That might work, but you might run into issues down the road. Also, it's a security vulnerability. You'd be better off just adding yourself to the docker group, as the docs say: sudo groupadd docker, sudo usermod -aG docker $USER.
Docs: https://docs.docker.com/install/linux/linux-postinstall/
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/json: dial unix /var/run/docker.sock: connect: permission denied
sudo chmod 666 /var/run/docker.sock
This fixed my problem.
Ubuntu 21.04 systemd socket ownership
Let me preface: this was a perfectly suitable solution for me during local development, and I got here searching for the Ubuntu Docker permission error, so I'll just leave this here.
I didn't own the unix socket, so I chowned it.
sudo chown $(whoami):$(whoami) /var/run/docker.sock
Another, more permanent solution for your dev environment, is to modify the user ownership of the unix socket creation. This will give your user the ownership, so it'll stick between restarts:
sudo nano /etc/systemd/system/sockets.target.wants/docker.socket
docker.socket:
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=YOUR_USERNAME_HERE
SocketGroup=docker
[Install]
WantedBy=sockets.target
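After editing the unit, something along these lines is typically needed for the new ownership to take effect (exact steps can vary by setup):
sudo systemctl daemon-reload
sudo systemctl restart docker.socket docker.service
ls -l /var/run/docker.sock
The last command should now show your user and the docker group on the socket.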
Seriously guys, do not add your user to the docker group or loosen the socket permissions (without SELinux hardening); it's an easy route to root privilege escalation. Just add an alias in your .bashrc instead; it's simpler and safer: alias dc='sudo docker'.
lightdm and kwallet ship with a bug that seems to not pass the supplementary groups at login. To solve this, besides sudo usermod -aG docker $USER, I also had to comment out
auth optional pam_kwallet.so
auth optional pam_kwallet5.so
to
#auth optional pam_kwallet.so
#auth optional pam_kwallet5.so
in /etc/pam.d/lightdm before rebooting, for the docker-group to actually have effect.
bug: https://bugs.launchpad.net/lightdm/+bug/1781418 and here: https://bugzilla.redhat.com/show_bug.cgi?id=1581495
Rebooting the machine worked for me.
$ reboot
This worked for me:
Add your user to the docker group and modify the socket file's ACL:
sudo usermod -aG docker $USER
sudo setfacl --modify user:$USER:rw /var/run/docker.sock
It's a better solution than using chmod.
Use this command:
sudo usermod -aG docker $USER
then restart your computer. This worked for me.
You can follow these steps and this will work for you:
Create a docker group: sudo groupadd docker
Add your user to this group: sudo usermod -aG docker $USER
List the groups to make sure the docker group was created successfully by running this command: groups
Also run the following command to switch your session to the docker group: newgrp docker
Change the group ownership of the file docker.sock: sudo chown root:docker /var/run/docker.sock
Change the ownership of the .docker directory: sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
Finally: sudo chmod g+rwx "$HOME/.docker" -R
After that, as a test, you can run docker ps -a
I ran into a similar problem as well, but where the container I wanted to create needed to mount /var/run/docker.sock as a volume (Portainer Agent), while running it all under a different namespace. Normally a container does not care about which namespace it is started in -- that is sort of the point -- but since access was made from a different namespace, this had to be circumvented.
Adding --userns=host to the run command for the container enabled it to attain the correct permissions.
Quite a specific use case, but after more research hours than I want to admit I just thought I should share it with the world if someone else ends up in this situation :)
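For context, a sketch of what such a run command might look like for the Portainer Agent (the port and bind mount follow the agent's usual setup; adjust to your environment):
docker run -d --name portainer_agent --userns=host -p 9001:9001 -v /var/run/docker.sock:/var/run/docker.sock portainer/agent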
I tried the command with sudo and it was OK: sudo docker pull hello-world or sudo docker run hello-world
In a Linux environment, after installing docker and docker-compose, a reboot (or a restart of the Docker service) is needed for Docker to work properly and avoid this issue.
$ sudo systemctl restart docker
It is definitely not the case the question was about, but as it is the first search result while googling the error message, I'll leave it here.
First of all, check if docker service is running using the following command:
systemctl status docker.service
If it is not running, try starting it:
sudo systemctl start docker.service
... and check the status again:
systemctl status docker.service
If it has not started, investigate the reason. Probably, you have modified a config file and made an error (like I did while modifying /etc/docker/daemon.json)
The Docker daemon binds to a Unix socket instead of a TCP port.
By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
To create the docker group and add your user:
Create the docker group
sudo groupadd docker
Add your user to the docker group
sudo usermod -aG docker $USER
Log out and log back in so that your group membership is re-evaluated.
If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect.
On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.
On Linux, you can also run the following command to activate the changes to groups:
newgrp docker
Verify that you can run docker commands without sudo. The below command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits
docker run hello-world
If you initially ran Docker CLI commands using sudo before adding your user to the docker group, you may see the following error, which indicates that your ~/.docker/ directory was created with incorrect permissions due to the sudo commands.
WARNING: Error loading config file: /home/user/.docker/config.json -
stat /home/user/.docker/config.json: permission denied
To fix this problem, either remove the ~/.docker/ directory (it is recreated automatically, but any custom settings are lost), or change its ownership and permissions using the following commands:
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R
All other post installation steps for docker on linux can be found here https://docs.docker.com/engine/install/linux-postinstall/
The most straightforward solution is to type
sudo chmod 666 /var/run/docker.sock
every time you boot your machine. However, this method defeats any system security that may be in place and opens up the Docker socket to everybody. If this is acceptable to you (e.g. you are the only user of your machine), then use it.
Nevertheless, since it will be required every time you boot your machine, you can make it run at boot by adding
start on startup
task
exec chmod 666 /var/run/docker.sock
to the /etc/init/docker-chmod.conf file.
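Note that /etc/init/*.conf is Upstart syntax; on a systemd-based distribution a roughly equivalent (hypothetical) unit, for example /etc/systemd/system/docker-sock-chmod.service, could look like this:
[Unit]
Description=Relax permissions on the Docker socket (insecure)
Requires=docker.socket
After=docker.socket
[Service]
Type=oneshot
ExecStart=/bin/chmod 666 /var/run/docker.sock
[Install]
WantedBy=multi-user.target
Enable it once with sudo systemctl enable docker-sock-chmod.service; the same security caveat applies.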
I tried all the described methods and nothing helped to solve the problem. The solution was to use the --use-drivers parameter when running selenoid and selenoid-ui. Below is the full listing of my Dockerfile.
FROM selenoid/chrome
USER root
RUN apt-get update
RUN apt-get -y install docker.io
RUN curl -s https://aerokube.com/cm/bash | bash
RUN ./cm selenoid start --vnc --use-drivers
RUN ./cm selenoid-ui start --use-drivers
EXPOSE 4444 8080
CMD ["-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video/"]
In my case, the process itself (a CI server agent) that was trying to run a docker command wasn't able to run it, but when I ran the same command as the same user it worked.
Restarting the daemon that runs the CI server agent solved the problem.
The reason the command wasn't working from within the agent before is that the agent had been started before I installed Docker and granted the docker group permissions, so the agent process was using cached old permissions and failing. Restarting the process dropped the cache and made things work.
As the shortest answer for Linux users:
simply run the command as the super user with sudo.
E.g.: sudo docker-compose up
After installing Docker on CentOS, I got the error below while running the following command.
[centos@aiops-dev-cassandra3 ~]$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
Change Group and Permission for docker.socket
[centos@aiops-dev-cassandra3 ~]$ ls -l /lib/systemd/system/docker.socket
-rw-r--r--. 1 root root 197 Nov 13 07:25 /lib/systemd/system/docker.socket
[centos@aiops-dev-cassandra3 ~]$ sudo chgrp docker /lib/systemd/system/docker.socket
[centos@aiops-dev-cassandra3 ~]$ sudo chmod 666 /var/run/docker.sock
[centos@aiops-dev-cassandra3 ~]$ ls -lrth /var/run/docker.sock
srw-rw-rw-. 1 root docker 0 Nov 20 11:59 /var/run/docker.sock
[centos@aiops-dev-cassandra3 ~]$
Verify by using below docker command
[centos@aiops-dev-cassandra3 ~]$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:c3b4ada4687bbaa170745b3e4dd8ac3f194ca95b2d0518b417fb47e5879d9b5f
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
[centos@aiops-dev-cassandra3 ~]$
After you have installed Docker, created the 'docker' group and added your user to it, edit the Docker service unit file:
sudo nano /usr/lib/systemd/system/docker.service
Add two lines into the section [Service]:
SupplementaryGroups=docker
ExecStartPost=/bin/chmod 666 /var/run/docker.sock
Save the file (Ctrl-X, y, Enter)
Run and enable the Docker service:
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl enable docker

Docker client under WSL2 doesn't work without sudo

On WSL2 (Ubuntu 20.04), I'm trying to connect to the Docker daemon that's running on Windows.
$ docker ps
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
(exit code 1)
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
(exit code 0)
Why does it work with sudo, but not without sudo? How can I make it work without sudo?
I have done
$ sudo usermod -aG docker $USER
which ran successfully, but didn't help with the issue.
I have also restarted everything many times, which didn't help.
Weird solution for this one - but go ahead and try:
unset DOCKER_HOST
And if that works, you can make the fix permanent by going back and commenting out the "export DOCKER_HOST=tcp://localhost:2375" in your .bashrc file. I think it has something to do with how docker is configured in WSL 2 vs. WSL 1, but Docker never updated their documentation to reflect this.
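For reference, a quick way to confirm this is the problem and apply the fix (ordinary shell steps, nothing Docker-specific):
$ echo $DOCKER_HOST          # if this prints tcp://localhost:2375, the variable is the culprit
$ unset DOCKER_HOST
$ docker ps                  # should now reach the daemon through the default socket
Then comment out the export DOCKER_HOST=... line in ~/.bashrc so new shells stay fixed.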
For me the commands below worked (execute them in order):
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
# And something needs to be done so $USER always runs in group `docker` on the `Ubuntu` WSL
sudo chown root:docker /var/run/docker.sock
sudo chmod g+w /var/run/docker.sock
Reference : https://github.com/rancher-sandbox/rancher-desktop/issues/1156#issuecomment-1017042882

How do you block console, root or other users, access to a docker container?

I tried installing puppet and changing the root user's shell to '/sbin/nologin' but I can still get right into the console?
It is a centOS 7 container.
Is Docker using a socket for the connection? Could I use selinux to block the socket? If I do I fear that I will also disable docker from being able to communicate with the container at all? I have been reading through Docker Security articles but have not found a good solution.
My end goal is for the container to be an ephemeral 'black box' when it comes up. My particular user case is a local web app, so no console access will be required.
You could try to remove all terminal commands (bash, sh, and so on) from the container:
docker exec -it [container-id] /bin/sh -c 'rm -R /bin/*'
At that point you will not be able to use docker exec -it [container-id] bash to get a console into the container.
If you want to be more gentle about it, you can remove only the shells you have (and leave all the other commands, like rm, available):
docker exec -it [container-id] /bin/rm -R /bin/bash
docker exec -it [container-id] /bin/rm -R /bin/sh
... and so on

How to develop within docker image

I started to experiment with Docker but have some questions regarding how to develop on it and regarding its use cases. If anyone could guide me through these questions, it would be much appreciated.
First,
As far as I understand, Docker is used mainly for developing applications in custom environments, thus avoiding tedious installation processes. This is initially my intention and why I'd like to use Docker.
I've created a Dockerfile which builds successfully and which has basic C++ development tools based upon library/gcc. I want to be able to develop in this Docker container as you would on your terminal.
What I did is I created a docker image from a Dockerfile. (I can observe that it is successfully created)
docker build -t mydockerimage .
Then run the docker in detached mode.
docker run -d mydockerimage
At this point, I am given the ID of the Docker container. However, the container does not seem to be running when I check the output of:
docker container ls
Here comes the first question, why is my docker container not running?
To my understanding, the simplest way to interact with the docker container is as follows:
docker exec -it <container_id_or_name> echo "Hello from container!"
Is this true? Is this a use case of docker in which I simply can start the container and exec some Linux command on it?
Moreover, I get a permission denied on /var/lib/docker.sock when I try to execute docker commands without sudo. What am I missing here?
Thank you in advance.
Do you provide an entrypoint or CMD in your dockerfile? This will be executed inside your container and keeps the container running. You can find some details here.
In short. Docker has a default entrypoint: /bin/sh -c, but no default CMD.
Check the dockerfile of ubuntu. This has bash as CMD so it's executing /bin/sh -c bash.
$ docker run -it ubuntu bash
root@9855e779cab2:/#
This will result in an interactive shell in which you can execute commands like on an ubuntu. If you exit the container the container will stop running.
To keep a container running you can use the -d option. It will run the container in the background as a daemon:
$ docker run -d -it ubuntu bash
2606ad8e095baa0237cc30e599a26a4d727d99d47392d779fb83cd50f1a39614
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2606ad8e095b ubuntu "bash" 18 seconds ago Up 17 seconds cranky_johnson
Now you can exec inside the container to "go inside" the container and execute ubuntu commands.
$ docker exec -it 2606ad8e095b bash
root@2606ad8e095b:/#
When you exit the container it remains running in the background.
Now we can execute your command too:
$ docker exec -it 2606ad8e095b echo "Hello from container!"
Hello from container!
This executes the command inside the running container and echoes the string.
I think it's important in your case you define some entrypoint (which can also be a script) or a CMD. Probably you need something very similar to Ubuntu when you just want to use bash inside your container.
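To illustrate that advice for the C++ case, here is a hypothetical Dockerfile and workflow (the image name, extra packages and paths are assumptions, not taken from the question):
FROM gcc:latest
# extra tooling for day-to-day development
RUN apt-get update && apt-get install -y --no-install-recommends gdb cmake && rm -rf /var/lib/apt/lists/*
WORKDIR /workspace
# keep an interactive shell as the long-running process
CMD ["bash"]
Build and run it with your source tree mounted, then exec into it to compile:
docker build -t cppdev .
docker run -d -it --name cppdev -v "$(pwd)":/workspace cppdev
docker exec -it cppdev bash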
Moreover, I get a permission denied on /var/lib/docker.sock when I try to execute docker commands without sudo. What am I missing here?
This is normal. The Docker daemon currently requires root privileges, so you have to use docker as the root user or as a user with root privileges, adding sudo every time. Alternatively, you can add your user to the docker group. Every time the daemon starts, it makes the Unix socket readable and writable by the docker group. This means you can use docker without sudo whenever your user is in the docker group.
To add your user to the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ exit
SSH back in or open a new shell

Installing systemd inside a ubuntu14.04 docker container - Is it possible?

I am trying to install and configure OpenStack (DevStack) inside a Docker container. While installing I am getting the following error:
"Failed to get D-Bus connection: No connection to service manager."
Later, I checked and found that it's because of a systemd problem. When I tried executing the command systemd
$>systemd
I am getting the following output:
Trying to run as user instance, but the system has not been booted with systemd.
Following is the setup I used:
Host machine OS : Ubuntu 14.04,
Docker Version : Docker version 1.12.4, build 1564f02,
Docker Container OS : Ubuntu 14.04
Can anyone help with this? Thanks in advance.
First of all, systemd expects /sys/fs/cgroup to be mounted. Additionally, you must make the container privileged, or else it will fail to start. For example:
docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged -it --rm ubuntu
Then you can go ahead and run /bin/systemd --system --unit=basic.target from bash, and it should run normally (with some errors of course, because Docker does not virtualize an entire system, nor is the library:ubuntu image more than the minimum size required to run properly).
After you have systemd running (semi-)properly, you can simply use a docker stop to stop the container.
This post is based on my own research, a few weeks of it too, for a project I like to call initbuntu (originally I tried to get init running, but running systemd directly was my only solution after all my failed tries). The container will be available on Docker Hub as logandark/initbuntu, Soon™. For now, a broken copy (or not broken, I dunno) is available there at the time of posting.
Sources (kinda):
/sys/fs/cgroup: Here
systemd --system: A StackOverflow post I lost the link to.
Existing DevStack on Docker Project
First of all, you can get a preconfigured Dockerfile with DevStack Ocata/Pike on Docker here. The repository also contains further information on DevStack and containers.
Build Your Own Image
Running systemd in Docker is certainly possible and has been done before. I found Ubuntu 16.04 LTS is a good foundation for the Docker host as well as the base image.
Your systemd/DevStack Dockerfile needs this configuration, which also cleans up services you probably don't want inside a Docker container:
FROM ubuntu:16.04
#####################################################################
# Systemd workaround from solita/ubuntu-systemd and moby/moby#28614 #
#####################################################################
ENV container docker
# No need for graphical.target
RUN systemctl set-default multi-user.target
# Gracefully stop systemd
STOPSIGNAL SIGRTMIN+3
# Cleanup unneeded services
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
# Workaround for console output error moby/moby#27202, based on moby/moby#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
If you intend to run OpenStack/DevStack inside said container, it might save you lots of trouble to start it privileged instead of defining separate security capabilities and volumes:
docker run \
--name devstack \
--privileged \
--detach \
image
To get a bash inside your new systemd container try this:
docker exec \
--tty \
--interactive \
devstack \
bash
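Once inside, a couple of generic sanity checks (not part of the original answer) confirm systemd is really running as PID 1:
ps -p 1 -o comm=
systemctl list-units --state=running
The first should print systemd; the second should list the units that survived the cleanup above.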
Systemd should work inside a properly configured container. You can run the container in privileged mode to run systemd.
"Systemd cannot run without SYS_ADMIN, less privileges than that won't work (see #2296 (comment)). Yes it's possible to make it "easier" (a tool that automatically sets these), but it'll still need certain privileges"
See this Github issue
After all, Docker is an application container: it runs the process you specify at run time, and once that process completes, the container exits. Maybe you need an OS container or a virtual machine for your use case. See OS container vs application container here.
In most cases the error message comes up because an installer program has tried to run "systemctl start ...". Unlike init scripts, the systemctl command will not try to execute the start script directly; instead it tries to contact the systemd daemon to execute the start sequence of the service. So all services have a common parent in the systemd daemon.
It can be quite overdone to run a systemd daemon inside a Docker container just to start a service. You could use the systemctl-docker-replacement, overwriting /usr/bin/systemctl, in which case the target service is started without the help of a systemd daemon; it runs the ExecStart from the *.service file directly.
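A hedged sketch of how such a replacement is typically wired into a Dockerfile (the script name follows the docker-systemctl-replacement project; verify against its current documentation):
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# services installed by their packages can then be enabled the usual way, e.g. (hypothetical service name):
RUN systemctl enable some-service
CMD ["/usr/bin/systemctl"]
Using the replacement as CMD keeps it running as the container's init-like process, so the enabled services are started when the container starts.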
