Issues running a docker container with a GUI application in host network mode - docker

I am trying to create a docker container with a ROS install and a simulation setup to streamline the process for people joining the project later.
When I run rviz the following way, the rviz window shows up on my host just fine, as expected, following this ROS tutorial:
sudo docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" xrf-robot-repo rviz
(ROS Master isn't running, so this output is expected.)
Now my issue is that when I run my container in host network mode (--net=host), the rviz window does not show up anymore. Here's what I run:
sudo docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" --net=host xrf-robot-repo rviz
I don't think the errors it prints have anything to do with the GUI window not showing up.
I have no idea why the GUI window does not show up, and I was hoping for some guidance here. My guess is that it has something to do with the host network mode affecting how the X11 forwarding works, but I am not sure how to look into this further.
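In case it helps narrow things down, one check that might be relevant (this is a guess on my part, along the lines of the usual Docker GUI setup) is whether the X server is rejecting the container's connection rather than the network mode breaking DISPLAY:
echo $DISPLAY        # on the host; typically :0 or :1
xhost +local:root    # allow local root clients (e.g. the container) to talk to the X server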
Here's what my Dockerfile looks like that I used to build the image in case it may be helpful:
FROM osrf/ros:melodic-desktop-full
SHELL ["/bin/bash", "-c"]
RUN apt-get update && apt-get install -y --no-install-recommends \
git apt-utils python3-catkin-tools \
&& rm -rf /var/lib/apt/lists/*
RUN source ./ros_entrypoint.sh && git clone https://github.com/RumailM/xrf-robot-stack
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && catkin_init_workspace
RUN apt-get update
ARG DEBIAN_FRONTEND=noninteractive
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && rosdep install \
    --from-paths src --ignore-src -r -y
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && catkin build
The reason I need to use host network mode is that I would like the host to be able to communicate with the rosmaster node and any other nodes within the container. I also do not know beforehand what nodes may exist outside the container or which ports they will communicate on, so the obvious answer of forwarding only the ports that I will use will not work (the ports may change at runtime). Forwarding large ranges of ports does not seem viable either.
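For context, my mental model of the host-network setup is roughly this (assuming the ROS default master port 11311): with --net=host the container shares the host's network stack, so a node on the host could do:
export ROS_MASTER_URI=http://localhost:11311   # ROS default master port
rostopic list                                  # should reach the master inside the container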
Any guidance is appreciated!

You should add the --privileged option to the docker run command. This is related to this issue: Qt applications and network host.
I ran into a similar issue and was able to resolve it by following this question on ROS Answers.
Also, to be able to run rviz properly, I needed the docker container to access my graphics card drivers (NVIDIA in my case), so I added -e NVIDIA_VISIBLE_DEVICES=all, -e NVIDIA_DRIVER_CAPABILITIES=all, and --runtime nvidia to the docker run command.
This was the final run command:
docker run -it --rm \
--privileged \
--network host \
-e NVIDIA_VISIBLE_DEVICES=all \
-e NVIDIA_DRIVER_CAPABILITIES=all \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--name "$CONTAINER_NAME" \
--runtime nvidia \
xrf-robot-repo \
/bin/bash
Hope this helps

Related

Jest-dynamoDB connection gets refused inside of docker container

I have a suite of tests written in Jest for DynamoDB that use the dynamodb-local instance as explained here, using this dependency. I use a custom-built Docker image to run a container within which the tests are executed.
Here's the Dockerfile
FROM openjdk:8-jre-alpine
RUN apk -v --no-cache add \
curl \
build-base \
groff \
jq \
less \
py-pip \
python openssl \
python3 \
python3-dev \
yarn \
&& \
pip3 install --upgrade pip awscli boto3 aws-sam-cli
EXPOSE 8000
I yarn install all of my dependencies and then run yarn test. After a long time, it fails with a connection-refused error.
This is the command I am using:
docker run -it --rm -p 8000:8000 -v $(pwd):/data -w /data aws-cli-java8-v15:latest
The tests work completely fine on my own machine, but no matter what project I use or what I include in my Dockerfile, the connection always gets refused.
I solved the issue; it turns out it has to do with Alpine Linux. Because Alpine uses musl instead of glibc, local DynamoDB won't start, and it crashes a few seconds after launch without outputting any error message. The solution is either to use OracleJDK on Alpine, which is hard enough given their new license, or to use any other OS that ships glibc with OpenJDK. You could also try to install glibc on Alpine and link it to your OpenJDK, but that's not a terribly good idea.
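For example, here is a rough sketch of the same image on a Debian-based (glibc) OpenJDK base. Package names differ from Alpine's (e.g. yarn is packaged as yarnpkg on Debian), so treat this as a starting point rather than a drop-in replacement:
FROM openjdk:8-jre-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl jq less python3 python3-pip yarnpkg \
    && rm -rf /var/lib/apt/lists/* \
    && pip3 install --upgrade pip awscli boto3 aws-sam-cli
EXPOSE 8000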

Running systemd in docker container causes host crash

I'm trying to create a systemd-based docker container, but when I run the built container my system crashes. I think running init in the container might be the problem, somehow conflicting with systemd on my host.
When I try to run the docker container I am logged out of my account and briefly see what looks like my system going through a boot process. My host is running Arch Linux, with linux 4.20.7.
It is only when I attempt to "boot" the container by running systemd via /sbin/init that the problem occurs.
docker run -it \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:rw \
--privileged 66304e3bc48
Dockerfile (adapted from solita/ubuntu-systemd):
FROM ubuntu:18.04
# Don't start any optional services.
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
RUN apt-get update && \
apt-get install --yes \
python sudo bash ca-certificates dbus systemd && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN systemctl set-default multi-user.target
RUN systemctl mask dev-hugepages.mount sys-fs-fuse-connections.mount
STOPSIGNAL SIGRTMIN+3
# Workaround for docker/docker#27202, technique based on comments from docker/docker#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
I would expect the container to just boot up running systemd, and I'm not sure what I might be doing wrong.
Docker does not want to include systemd inside containers by default, because it positions itself as an application container runtime (meaning one application per container). There is another type of container, called a system container; the best-known implementations are OpenVZ, LXC/LXD, and systemd-nspawn. All of those run a full OS with systemd, as if it were a virtual machine.
Using systemd inside Docker is dangerous compared to running systemd inside LXD.
There is even a newcomer called Podman, which is a clone of Docker that supports systemd inside containers by default, for instance when you use an image that already contains systemd, like the Ubuntu cloud images: http://uec-images.ubuntu.com/releases/server/bionic/release/
So my advice is to test LXD and systemd-nspawn, and to keep an eye on Podman; it solves what Docker does not want to solve. Read this to understand: https://lwn.net/Articles/676831/
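For a quick taste of a system container (assuming LXD is installed and initialized):
lxc launch ubuntu:18.04 systest        # boots a full Ubuntu with systemd as PID 1
lxc exec systest -- systemctl status   # systemd really is running inside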
References:
https://coreos.com/rkt/docs/latest/rkt-vs-other-projects.html
https://podman.io/slides/2018_10_01_Replacing_Docker_With_Podman.pdf
https://containerjournal.com/features/system-containers-vs-application-containers-difference-matter
Runtimes And the Curse of the Privileged Container: https://brauner.github.io/2019/02/12/privileged-containers.html
I ended up using the paulfantom/ubuntu-molecule Docker image.
Currently it looks like they're just installing systemd, setting some environment variables, and using the systemd binary directly as the entry point. It seems to work without the issues I mentioned in the original post.
Dockerfile
FROM ubuntu:18.04
ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
RUN sed -i 's/# deb/deb/g' /etc/apt/sources.list
# hadolint ignore=DL3008
RUN apt-get update \
&& apt-get install -y --no-install-recommends systemd python sudo bash iproute2 net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# hadolint ignore=SC2010,SC2086
RUN cd /lib/systemd/system/sysinit.target.wants/ \
&& ls | grep -v systemd-tmpfiles-setup | xargs rm -f $1
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/basic.target.wants/* \
/lib/systemd/system/anaconda.target.wants/* \
/lib/systemd/system/plymouth* \
/lib/systemd/system/systemd-update-utmp*
RUN systemctl set-default multi-user.target
ENV init /lib/systemd/systemd
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/lib/systemd/systemd"]
To "match the host as close as possible" was the original goal of the docker-systemctl-replacement script. You can test drive scripts in a container that may be run later on a virtual machine. It allows to do some systemctl commands without an active systemd daemon.
It can also serve as an init daemon if you wish to. A systemd-enabled operating system will feel quite similar inside a container with it.
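Roughly, per that project's documentation, usage looks like this (a sketch; check the repo for the exact script name and the required Python version):
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y --no-install-recommends python
COPY systemctl.py /usr/bin/systemctl   # the replacement script from the repo
CMD ["/usr/bin/systemctl"]             # acts as init, bringing up enabled services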

Pass Docker run command through dockerfile

I am trying to run Docker inside my container. I saw in some articles that I need to pass --privileged=true to make this possible.
But I do not have the option to pass this parameter when running, because the run is handled by some automation that I do not have access to.
So I was wondering if it's possible to set the above option in the Dockerfile, so that I do not have to pass it as a parameter.
Right now this is the content of my Dockerfile:
FROM my-repo/jenkinsci/jnlp-slave:2.62
USER root
#RUN --privileged=true this doesn't work for obvious reasons
MAINTAINER RD_TOOLS "abc@example.com"
RUN apt-get update
RUN apt-get remove docker docker-engine docker.io || echo "No worries"
RUN apt-get --assume-yes install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common curl
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN cat /etc/*-release
RUN apt-get --assume-yes install docker.io
RUN docker --version
RUN service docker start
Without passing the privileged=true param, it seems I can't run Docker inside Docker.
Any help in this regard is highly appreciated.
You can't force a container to run as privileged from within the Dockerfile.
As a general rule, you can't run Docker inside a Docker container; the more typical setup is to share the host's Docker socket. There's an official Docker image that attempts this at https://hub.docker.com/_/docker/ with some fairly prominent suggestions to not actually use it.
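If sharing the host's socket fits your setup, the usual pattern is something like the following (my-image is a placeholder for an image with the docker CLI installed, which your Dockerfile already provides via docker.io; note this gives the container root-equivalent control of the host's Docker daemon, and any containers it starts are siblings of it, not children):
docker run -v /var/run/docker.sock:/var/run/docker.sock \
    my-image \
    docker ps   # runs the docker CLI inside, against the host's daemon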

AEM 6.0 on docker - Dbus connection error

I'm trying to dockerize an AEM 6.0 installation, and this is the Dockerfile for my author.
FROM centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
RUN yum install dnsmasq -y
RUN systemctl enable dnsmasq
RUN yum install initscripts -y
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*;\
rm -f /lib/systemd/system/sockets.target.wants/*initctl*;\
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
COPY aem6 /etc/init.d/aem6
RUN chkconfig --add aem6
RUN yum -y install initscripts && yum update -y && yum clean all
RUN chown -R $USER:$(id -G) /etc/init.d
RUN chmod 777 -R /etc/init.d/aem6
RUN systemctl enable aem6.service
RUN service aem6 start
VOLUME /sys/fs/cgroup
CMD /usr/sbin/init
The build fails on starting the service, with the error "Failed to get D-Bus connection". I haven't been able to figure out how to fix it.
I've tried these
- https://github.com/CentOS/sig-cloud-instance-images/issues/45
- https://hub.docker.com/_/centos/
Here, the problem is that you're trying to start the aem service during the "build" phase, with this statement:
RUN service aem6 start
This is problematic for a number of reasons. First, you're building an image. Starting a service at this stage is pointless: when the build process completes, nothing is running. An image is just a collection of files. You don't have any processes until you boot a container, at which point your CMD and ENTRYPOINT influence what is running.
Another problem is that at this stage, nothing else is running inside the container environment. The service command in this case is trying to communicate with systemd over the D-Bus API, but neither of those services is running.
There is a third slightly more subtle problem: the solution you've chosen relies on systemd, the standard CentOS process manager, and as far as things go you've configured things correctly (by both enabling the service with systemctl enable ... and by starting /sbin/init in your CMD statement). However, running systemd in a container can be tricky, although it is possible. In the past, systemd required the container to run with the --privileged flag; I'm not sure if this is necessary any more.
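For reference, a typical invocation for a systemd-based container looks something like this (a sketch; whether --privileged is still required depends on your Docker and systemd versions, and the image name is illustrative):
docker run -d --privileged \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    my-aem-image /usr/sbin/init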
If you weren't running multiple processes (dnsmasq and aem) inside the container, the simplest solution would be to start the aem service directly, rather than relying on a process manager. This would reduce your Dockerfile to something like:
FROM centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
CMD some commandline to start aem
If you actually require dnsmasq, you could run it in a second container (potentially sharing the same network environment as the aem container).
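For example (image names illustrative):
docker run -d --name aem my-aem-image
docker run -d --network container:aem my-dnsmasq-image   # shares aem's network namespace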

Dockerfile with LAMP running (Ubuntu)

I'm trying to create a Docker (LAMP) image with the following
Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
apache2 \
mysql-server \
php7.0 \
php7.0-bcmath \
php7.0-mcrypt
COPY start-script.sh /root/
RUN chmod +x /root/start-script.sh && /root/start-script.sh
start-script.sh:
#!/bin/bash
service mysql start
a2enmod rewrite
service apache2 start
I build it with:
docker build -t resting/ubuntu .
Then run it with:
docker run -it -p 8000:80 -p 5000:3306 -v $(pwd)/html:/var/www/html resting/ubuntu bash
The problem is, the MySQL and Apache2 services are not started.
If I run /root/start-script.sh manually in the container, port 80 maps fine to port 8000, but I couldn't connect to MySQL at 127.0.0.1:5000.
How can I ensure that the services are running when I spin up a container with the image, and map MYSQL out to my host machine?
You need to change the execution of the script to a CMD instruction.
FROM ubuntu:latest
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
apache2 \
mysql-server \
php7.0 \
php7.0-bcmath \
php7.0-mcrypt
COPY start-script.sh /root/
RUN chmod +x /root/start-script.sh
CMD /root/start-script.sh
Although this works, it is not the right way to manage containers. You should have one container for your Apache2 and another one for MySQL.
Take a look at this article, which builds a LAMP stack using Docker Compose: https://www.kinamo.be/en/support/faq/setting-up-a-development-environment-with-docker-compose
You need multiple images: one for each service or app.
A Docker container is not a virtual machine in which you run an entire stack. It is a virtual application, running one primary process.
If you need PHP, Apache, and MySQL, then you will need 3 Docker containers: one for your PHP app, one for Apache, and one for MySQL.
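As a sketch of what that looks like with plain docker commands (image tags illustrative; the Compose article above achieves the same thing declaratively):
docker network create lamp
docker run -d --name db --network lamp \
    -e MYSQL_ROOT_PASSWORD=secret -p 5000:3306 mysql:5.7
docker run -d --name web --network lamp \
    -p 8000:80 -v $(pwd)/html:/var/www/html php:7.0-apache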
