"systemctl" command doesn't work on centos with docker - docker

I use Docker with CentOS 8.
How can I use the systemctl command in a Dockerfile, please?
When I install an app, it needs systemctl.
I get this error:
System has not been booted with systemd as init system (PID 1). Can't
operate. Failed to connect to bus: Host is down
I build the image like this:
docker build -t myapp:11 .
The same happens when I try it in the container:
docker run -it --privileged app:11 /bin/bash
Thank you.
Here are my actual commands and Dockerfile:
docker build -t nuance:11 .
docker run -it --cap-add=NET_ADMIN nuance:11 /bin/bash
# syntax=docker/dockerfile:1
FROM centos:latest
USER root
RUN cd /etc/yum.repos.d/
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
RUN yum -y update && \
yum clean all
RUN yum -y install \
java-11-openjdk-devel \
perl-Data-Dumper \
redhat-lsb-core.x86_64 \
glibc.x86_64 \
glibc.i686 \
libstdc++.x86_64 \
libstdc++.i686 \
openssl \
libgcc \
libgcc.i686 \
libaio.x86_64 \
libaio.i686 \
libnsl.i686 \
ncurses-libs \
httpd.x86_64 \
unzip \
-x postfix \
-x mariadb-libs \
zlib.i686 \
zlib.x86_64
WORKDIR /tmp
COPY Nuance_Speech_Suite-11.0.10-x86_64-linux.tgz ./Nuance_Speech_Suite-11.0.10-x86_64-linux.tgz
COPY NRec-fr-FR-10.0.0-10.1.0.i686-linux.tar.gz ./languages/NRec-fr-FR-10.0.0-10.1.0.i686-linux.tar.gz
COPY NVE_fr_FR_audrey-ml_xpremium-2.1.0_linux.zip ./languages/NVE_fr_FR_audrey-ml_xpremium-2.1.0_linux.zip
COPY NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-1_linux.zip ./languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-1_linux.zip
COPY NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-2_linux.zip ./languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-2_linux.zip
COPY nuance.lic ./nuance.lic
RUN tar -zxf Nuance_Speech_Suite-11.0.10-x86_64-linux.tgz
RUN tar -zxf languages/NRec-fr-FR-10.0.0-10.1.0.i686-linux.tar.gz
RUN unzip languages/NVE_fr_FR_audrey-ml_xpremium-2.1.0_linux.zip
RUN unzip languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-1_linux.zip
RUN unzip languages/NRec-fr-FR-10.0.0-10.1.0-CumulativePatch-2_linux.zip
WORKDIR /tmp/Nuance_Speech_Suite-11.0.10
RUN ./setup.sh -s -f "/tmp/nuance.lic" -j "/usr/lib/jvm/java-11-openjdk" -V "/tmp/languages" -I "NLM,NSS"
Last lines of the log:
2022-12-16 09:22:11 setup.sh: info: Restarting the Nuance License Manager service
2022-12-16 09:22:11 setup.sh: info: starting command 'systemctl restart nuance-licmgr'; output sent to log
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
2022-12-16 09:22:11 setup.sh: info: Command 'systemctl restart nuance-licmgr' returned 1
2022-12-16 09:22:11 setup.sh: error: install_postprocessing_nlm_startservices() failed to start services
2022-12-16 09:22:11 setup.sh: info: skipping invocation of install_postprocessing_nms() due to previous post processing errors
2022-12-16 09:22:11 setup.sh: info: Skipping install_execute_installsuite due to previous errors

You can't run systemctl in a Dockerfile at all. More broadly, commands like systemctl or service don't work well in Docker, and you should restructure your container to avoid them.
For systemctl more specifically, it tries to connect to the systemd daemon. In a Dockerfile, each RUN step occurs in a new container, and like other containers, that container only runs the one RUN command; it does not run systemd or any other typical Linux daemons. Furthermore, at the end of the RUN line, the filesystem is persisted but any other changes are lost, so even if you systemctl start something successfully, the image won't contain a running process.
More generally, I'd recommend avoiding systemd in Docker. A minimal init system like tini can be a good idea for some problems, like reaping zombie processes; if you must run multiple processes in one container and really can't refactor it, then supervisord can fill this need. A typical systemd installation will want to configure kernel parameters, start terminal logins, mount filesystems, and configure the network, all of which are basically impossible in Docker; it will also capture the main process's stdout, so docker logs won't work.
Aim for your container to only have one process. Don't run an init system at all if you don't need to. Don't try to "start a service", just run the program you're trying to build in the foreground as the one thing the container does.
FROM some-base-image
RUN a command to install the software
CMD the_program
# with no `systemctl` anywhere
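For example, here is a minimal sketch of that pattern using httpd (which happens to be in your package list already); this is not your Nuance setup, just an illustration of running a daemon in the foreground without systemctl:
FROM centos:7
# if the CentOS mirrors have moved, point the repos at vault.centos.org as in your own Dockerfile
RUN yum -y install httpd && yum clean all
EXPOSE 80
# -DFOREGROUND keeps httpd in the foreground as PID 1, replacing "systemctl start httpd"
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]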

If you are facing the following error when running docker:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'
or
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
Run the following command:
$ sudo systemctl status docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
The reason is that you are trying to use a systemd command to manage services on Linux, but your system doesn't use systemd and is (most likely) using the classic SysV init (sysvinit) system.
Run the following command to confirm that this is the case:
$ ps -p 1 -o comm=
init
Now check the status again using:
$ sudo service docker status
* Docker is not running
You can start Docker using the following command:
sudo service docker start
* Starting Docker: docker
For more details, please refer to the following link:
https://linuxhandbook.com/system-has-not-been-booted-with-systemd/
Systemd command                    Sysvinit command
systemctl start service_name       service service_name start
systemctl stop service_name        service service_name stop
systemctl restart service_name     service service_name restart
systemctl status service_name      service service_name status
systemctl enable service_name      chkconfig service_name on
systemctl disable service_name     chkconfig service_name off
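If you maintain scripts that must work on both kinds of systems, a small hypothetical wrapper along these lines can dispatch to the right tool (a sketch, not a hardened script):
#!/bin/sh
# svc: run an action against a service using whichever init system owns PID 1
svc() {
    action="$1"
    name="$2"
    if [ "$(ps -p 1 -o comm=)" = "systemd" ]; then
        systemctl "$action" "$name"
    else
        service "$name" "$action"
    fi
}
# usage:
#   svc status docker
#   svc restart docker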

Related

Enable systemctl in Docker container

I am trying to create my own Docker container and a custom service which I created for my work. This is my service file:
[1/1] /etc/systemd/system/qsinavAI.service
[Unit]
Description=uWSGI instance to serve Qsinav AI
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/root/AI/
Environment="PATH=/root/AI/bin"
ExecStart=/root/AI/bin/uwsgi --ini ai.ini
[Install]
WantedBy=multi-user.target
When I try to run this service, I get this error:
System has not been booted with systemd as init system (PID 1). Can't
operate. Failed to connect to bus: Host is down
I searched a lot to find a solution but could not. How can I enable systemctl in Docker?
This is the command that I am using to run the container:
docker run -dt -p 5000:5000 --name AIPython2 --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro --cap-add SYS_ADMIN last_python_image
If your application is only ever run inside a container then you should create a docker-entrypoint.sh script with an "exec" at the end so that your application is run as a remapped PID 1 in the container. That way cloud systems can see if the application is alive and they can send a SIGTERM to stop the application.
#! /bin/bash
cd /root/AI
PATH=/root/AI/bin
exec /root/AI/bin/uwsgi --ini ai.ini
If your application shall also be able to run in a systemd environment outside of a container, then you can choose to reuse the systemd descriptor. It requires an init daemon on PID 1 and a service manager to check the "enabled" services. One example would be the systemctl-docker-replacement script.
Docker containers should have an "entrypoint" command that runs in the foreground to keep the container running. The basic idea behind a container is that it runs only as long as the root process that started it keeps running. Even if a systemctl start qsinavAI.service command succeeded, the container would stop as soon as that command exits.
By design, containers started in detached mode exit when the root process used to run the container exits, ...
See the reference about this, and about starting an nginx service, in the official documentation.
So instead of trying to run your application as a service, you should have an entrypoint statement at the end of your Dockerfile. Then when you start this container with docker run, you can specify -d to run it in "detached" mode.
For example, taking the command from ExecStart and assuming it runs in the foreground:
ENTRYPOINT ["/root/AI/bin/uwsgi", "--ini", "ai.ini"]
Example of how to create an image with systemd and boot it like a real environment. A Dockerfile is required:
FROM ubuntu:22.04
RUN echo 'root:root' | chpasswd
RUN printf '#!/bin/sh\nexit 0' > /usr/sbin/policy-rc.d
RUN apt-get update
RUN apt-get install -y systemd systemd-sysv dbus dbus-user-session
ENTRYPOINT ["/sbin/init"]
/sbin/init is important: it starts systemd and makes systemctl usable.
Then build the image and run it:
docker build -t testimage -f Dockerfile .
docker run -it --privileged --cap-add=ALL testimage
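Once it has booted, you can check from the host that systemd is actually answering; the container name or ID comes from docker ps:
docker exec -it <container> systemctl is-system-running   # "running" or "degraded"
docker exec -it <container> systemctl status --no-pager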

Start ssh using systemctl inside the docker container

I'm a beginner with Docker.
I have pulled a CentOS 7 image from Docker Hub and run it.
I need to SSH into the Docker container (CentOS 7) from my host.
I got the container's IP using docker inspect container-id.
I have installed the following packages:
initscripts
systemd.x86_64
systemd-libs.x86_64
open-ssh
firewalld
net-tools
When I tried to start the firewall to open the SSH port (22):
[root@a6f3e3eb095c ~]# systemctl start firewall
Failed to get D-Bus connection: Operation not permitted
I also tried:
[root@a6f3e3eb095c ~]# /usr/lib/systemd/systemd --system &
[1] 353
[root@a6f3e3eb095c ~]# systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization xen.
Detected architecture x86-64.
Welcome to CentOS Linux 7 (Core)!
Set hostname to <a6f3e3eb095c>.
Cannot determine cgroup we are running in: No such file or directory
Failed to allocate manager object: No such file or directory
[1]+ Exit 1 /usr/lib/systemd/systemd --system
How do I start the firewall/SSH inside the Docker container?
Inside the Docker container, run the following commands:
yum update -y glibc-common
yum install -y sudo passwd openssh-server openssh-clients tar screen crontabs strace telnet perl libpcap bc patch ntp dnsmasq unzip pax which
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install -y hiera lsyncd sshpass rng-tools
service sshd start;
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config;
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config;
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config;
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Base.repo
mkdir -p /root/.ssh/;
rm -f /var/lib/rpm/.rpm.lock;
echo "StrictHostKeyChecking=no" > /root/.ssh/config;
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config
echo "root:password" | chpasswd
Or:
You can simply pull a CentOS image with SSH already set up from Docker Hub:
https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=centos+ssh&starCount=0
https://hub.docker.com/r/kinogmt/centos-ssh/
https://hub.docker.com/r/jdeathe/centos-ssh/
You can avoid the "Failed to get D-Bus connection: Operation not permitted" error (i.e., installing systemd inside a Docker container) by using https://github.com/gdraheim/docker-systemctl-replacement. After that, docker exec works fine for doing things inside the container.
If you really do need an SSH or SFTP container, then you can use my Docker image as a source image for your own, or run it directly.
If using the official CentOS-7 Image and you require systemd, there are instructions on how to enable it under the section "Systemd integration".
However, based on the following:
I need to ssh in to the docker container(CentOS 7) from my host.
You can use docker exec to run commands in a running, (backgrounded), container so, for images that have bash available, you can access an interactive tty and run bash as follows from your host - where container can be either the name or id:
docker exec --tty --interactive <container> bash
OR
docker exec -ti <container> bash
Finally, it's unlikely to be necessary to install the firewall package in your image as the operator will decide what ports to publish from those which are exposed and you can make use of Docker Networking to only expose the necessary public facing services.
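For example, publishing the container's SSH port when you start it is usually enough, with no firewalld inside the image at all (the image name is a placeholder):
# publish the container's port 22 on the host instead of opening firewall ports inside it
docker run -d -p 2222:22 --name ssh-test <your-image>
ssh -p 2222 root@localhost   # from the host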
If you are using the Docker CLI, then you can get into the Docker container using the following command
docker exec -it containerId bash
I am not sure how to SSH into the Docker container, but if you want to do basic operations inside it, you can make use of the above docker command.

Docker - Process not starting at boot

I am containerizing the latest version of Grafana and want to start the Grafana process when the container starts, so I can then use it in my Kubernetes (K8s) cluster.
My Dockerfile looks like this:
FROM armdocker/baseimages/rhel:7-20161207
MAINTAINER xxxxxxxx
ENV GRAFANA_VERSION_MAJOR=4 GRAFANA_VERSION_MINOR=4 GRAFANA_VERSION_PATCH=3-1
ENV GRAFANA_VERSION=${GRAFANA_VERSION_MAJOR}.${GRAFANA_VERSION_MINOR}.${GRAFANA_VERSION_PATCH}
RUN yum clean all && yum install -y unzip tar
RUN curl -f -L -o grafana-${GRAFANA_VERSION}.x86_64.rpm https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-${GRAFANA_VERSION}.x86_64.rpm && \
yum localinstall grafana-${GRAFANA_VERSION}.x86_64.rpm -y
EXPOSE 3000
ENTRYPOINT ["/etc/init.d/grafana-server start"]
Building the Dockerfile is successful and returns no errors.
When I try to run this image, I get this error:
docker run -dit -p 3000:3000 armdocker/proj/grafana:1.0.5
471b2acb964caad69bbb78831a59ee9d2b27997911b5b104b0057ddc957d1101
Error response from daemon: Cannot start container 471b2acb964caad69bbb78831a59ee9d2b27997911b5b104b0057ddc957d1101: [8] System error: exec: "/etc/init.d/grafana-server start": stat /etc/init.d/grafana-server start: no such file or directory
This seems very weird, since I am installing the RPM first (which creates the file /etc/init.d/grafana-server) and then trying to start the process as my ENTRYPOINT.
I then tried:
CMD ["/etc/init.d/grafana-server start"]
This also results in the same error: /etc/init.d/grafana-server start: no such file or directory
I then tried using the systemctl command:
docker run -dit -p 3000:3000 armdocker/proj/grafana:1.0.6
bfd492c75a0f4c284fc0fdbd5a590f0155f6f67bcb4834e144f344bb789546f3
Error response from daemon: Cannot start container bfd492c75a0f4c284fc0fdbd5a590f0155f6f67bcb4834e144f344bb789546f3: [8] System error: exec: "/bin/systemctl start grafana-server.service": stat /bin/systemctl start grafana-server.service: no such file or directory
I am out of ideas as to what I am doing wrong to get a container with a running Grafana process.
Unless you're running your own systemd daemon inside of the container (I don't recommend this, it creates lots of issues), you shouldn't be trying to start the process with a systemctl or /etc/init.d command. Containers are not VMs; they are a method to run an application within its own namespace, and when that application exits, so does your container. When your application is something like a systemctl start command, your container will exit the moment that systemctl command returns, which isn't useful if you were hoping it would stay up for the duration of the Grafana process. (As an aside, the immediate "no such file or directory" error comes from using the exec form of ENTRYPOINT with a single string: Docker looks for an executable literally named "/etc/init.d/grafana-server start", space included.)
Rather than trying to reinvent the wheel, I'd recommend you look at how grafana themselves packages their docker container. Specifically their run.sh ends with:
exec gosu grafana /usr/sbin/grafana-server \
--homepath=/usr/share/grafana \
--config=/etc/grafana/grafana.ini \
cfg:default.log.mode="console" \
cfg:default.paths.data="$GF_PATHS_DATA" \
cfg:default.paths.logs="$GF_PATHS_LOGS" \
cfg:default.paths.plugins="$GF_PATHS_PLUGINS" \
"$#"
Their repo is over at https://github.com/grafana/grafana-docker
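If you want to stay on your own RHEL base image instead, here is a rough sketch along the same lines; the binary and config paths are taken from the run.sh above, so treat it as a starting point, not a tested image:
FROM armdocker/baseimages/rhel:7-20161207
ENV GRAFANA_VERSION=4.4.3-1
RUN yum clean all && yum install -y unzip tar && \
    curl -f -L -o grafana-${GRAFANA_VERSION}.x86_64.rpm \
        https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-${GRAFANA_VERSION}.x86_64.rpm && \
    yum localinstall -y grafana-${GRAFANA_VERSION}.x86_64.rpm
EXPOSE 3000
# run the server binary in the foreground instead of the SysV init script
ENTRYPOINT ["/usr/sbin/grafana-server", "--homepath=/usr/share/grafana", "--config=/etc/grafana/grafana.ini"]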
As an alternative, you could use the docker-systemctl-replacement script and register it as the main CMD of the image. It will read the *.service files to know how to start and stop a service (without the help of a systemd daemon). So if the Grafana folks change their startup scenario, your builds will continue to work. ;)

docker inside docker container

I want to install Docker inside a running Docker container.
docker run -it centos:centos7
My base container is using CentOS. I can log in to the running container using docker exec, and when I install Docker inside it using yum install -y docker, it installs.
But somehow I can't start the Docker daemon with docker -d &; it gives me this error:
INFO[0000] Option DefaultNetwork: bridge
WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: , error: exit status 1
FATA[0000] Error starting daemon: Error initializing network controller: Error initializing bridge driver: Setup IP forwarding failed: open /proc/sys/net/ipv4/ip_forward: read-only file system
Is there a way I can install Docker inside a Docker container, or build an image that already has Docker running? I have already seen these examples, but none of them works for me.
The output of uname -r on the host machine:
[fedora# ~]$ uname -r
4.2.6-200.fc22.x86_64
Any help would be appreciated.
Thanks in advance
Update
Thanks to https://stackoverflow.com/a/38016704/372019 I want to show another approach.
Instead of mounting the host's docker binary, you should copy or install a container-specific release of the docker binary. Since you're only using it in client mode, you won't need to install it as a system service. You still need to mount the Docker socket into the container so that you can easily communicate with the host's Docker engine.
Assuming that you have a base image with a working Docker binary (e.g. the official docker image), the example now looks like this:
docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
docker:1.12 docker info
Without actually answering your question, I'd suggest you read Using Docker-in-Docker for your CI or testing environment? Think twice.
It explains why running Docker-in-Docker should be replaced with a setup where Docker containers run as siblings of the "outer" or "base" container. The article also links to the original https://github.com/jpetazzo/dind project, where you can find working examples of how to run Docker in Docker - in case you still want Docker-in-Docker.
An example of how to enable a container to access the host's Docker daemon looks like this:
docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
-v /usr/bin/docker:/usr/bin/docker\
busybox:latest /usr/bin/docker info
If you are on a Mac with Docker Toolbox, the command below WON'T WORK:
docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
-v /usr/bin/docker:/usr/bin/docker\
busybox:latest /usr/bin/docker info
This is because /var/run/docker.sock will not be on your OS X filesystem;
the Docker daemon is running inside the boot2docker VM, and that's where the Unix socket is.
So you have to run the container from the boot2docker VM:
$ docker-machine ssh default
$ docker run\
-v /var/run/docker.sock:/var/run/docker.sock\
-v $(which docker):/usr/bin/docker\
busybox:latest /usr/bin/docker info
$ exit
This looks like Docker-in-Docker and feels like Docker-in-Docker, but it's not Docker-in-Docker: when this container creates more containers, those containers will be created in the top-level Docker.
You need the --privileged parameter.
By default, Docker containers are “unprivileged” and cannot, for
example, run a Docker daemon inside a Docker container.
Source
Run your base image with the command docker run --privileged -it centos:centos7 bash. Then you may install and run another docker container inside that container.
I've had similar problems in my VMs.
I solved the problem by changing the Docker storage driver to vfs (set in the daemon.json file).
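Roughly, the daemon.json looks like this (a minimal sketch; your file may carry more settings):
{
  "storage-driver": "vfs"
}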
To make this work in an image, first create a base image, in my case with CentOS 7:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
With this image built (in my case I called it local/c7-systemd), create a second image, installing Docker and copying the daemon.json into it:
FROM local/c7-systemd
RUN yum install -y yum-utils
RUN yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
RUN yum install -y docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
RUN ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
COPY daemon.json /etc/docker/daemon.json
RUN yum install -y nano
RUN systemctl enable docker
EXPOSE 80
EXPOSE 8080
EXPOSE 8161
EXPOSE 6379
EXPOSE 8761
CMD ["/usr/sbin/init"]
Enjoy!

How to 'avahi-browse' from a docker container?

I'm running a container based on ubuntu:14.04, and I need to be able to use avahi-browse inside it. However:
(.env)root@8faa2c44e53e:/opt/cluster-manager# avahi-browse -a
Failed to create client object: Daemon not running
(.env)root@8faa2c44e53e:/opt/cluster-manager# service avahi-daemon status
Avahi mDNS/DNS-SD Daemon is running
The actual problem I have is a pybonjour error, pybonjour.BonjourError: (-65537, 'unknown'), but I've read that it is linked to the problem with the avahi-daemon.
So, how do I connect to the avahi-daemon from the container?
P.S. I have to switch dbus off in the avahi-daemon.conf file to make it possible to start avahi-daemon; otherwise it won't start, giving a dbus error like this:
(.env)root@8faa2c44e53e:/opt/cluster-manager# avahi-daemon
Found user 'avahi' (UID 103) and group 'avahi' (GID 107).
Successfully dropped root privileges.
avahi-daemon 0.6.31 starting up.
dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
WARNING: Failed to contact D-Bus daemon.
avahi-daemon 0.6.31 exiting.
As far as I can test, you can use the host's avahi-daemon through its Unix socket for mDNS resolution, and mount /var/run/dbus for avahi-browse to work.
E.g.:
docker run -v /var/run/dbus:/var/run/dbus -v /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket -ti debian:10-slim bash
To test inside the container:
apt-get update && apt-get install avahi-utils iputils-ping -y
ping whatever.local
avahi-browse -a
Avahi requires D-Bus in order to communicate with clients. It sounds like your Docker container isn't starting the system D-Bus. If you do that, then Avahi should work.
You need D-Bus for most of Avahi's functionality (including avahi-browse), so disabling it won't really help.
There is a docker image supposedly supporting avahi from within the container. The trick seems to be to mount /var/run/dbus from the host into the container.
Note that I couldn't get this image to run on my 16.04 host.
I ran into the same problem getting avahi and dbus to operate correctly on Ubuntu 14.04 (specifically, I was trying to use ROS TurtleBot). I solved it by incorporating a modified version of the instructions in docker-systemd into my Dockerfile:
FROM ubuntu:14.04
RUN apt-get update &&\
apt-get install -y avahi-utils avahi-daemon libnss-mdns systemd
RUN cd /lib/systemd/system/sysinit.target.wants/;\
ls | grep -v systemd-tmpfiles-setup | xargs rm -f; \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*; \
rm -f /lib/systemd/system/plymouth*; \
rm -f /lib/systemd/system/systemd-update-utmp*
RUN mkdir -p /var/run/dbus
ENV init /lib/systemd/systemd
After modifying your Dockerfile to include these instructions, you should create a container using the following command:
docker run --rm --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -it <DOCKER_IMAGE> /bin/bash
Finally, once you're inside the container, you must execute the following commands before attempting to use avahi-browse (directly or indirectly):
$ dbus-daemon --system
$ /etc/init.d/avahi-daemon start
Another solution is to use mdns-repeater on the host to forward mDNS packets to the Docker network:
mdns-repeater eth1 docker0
I needed to add two parameters to my docker run command for the avahi-browse -at command to work inside the container:
--privileged and -v /var/run/dbus:/var/run/dbus
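Put together, the call looks roughly like this (the image is a placeholder for one that already has avahi-utils installed):
docker run --privileged \
    -v /var/run/dbus:/var/run/dbus \
    -it <image-with-avahi-utils> avahi-browse -at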

Resources