How to run xeyes in Docker Ubuntu? [closed]

My goal is to do X11 forwarding. The first step is to make sure xeyes works. However, when I try to run xeyes, it throws an error.
The command:
(base) jason@Jasons-Mac-mini darkmark-docker-web % docker run -it --rm --net=host -e DISPLAY -v $HOME/.Xauthority:/root/.Xauthority my-xeyes
Error: Can't open display: /private/tmp/com.apple.launchd.abcde/org.xquartz:0
FROM debian:latest
RUN apt-get update && apt-get install -y \
    x11-apps \
    && rm -rf /usr/share/doc/* \
    && rm -rf /usr/share/info/* \
    && rm -rf /tmp/* \
    && rm -rf /var/tmp/*
RUN useradd -ms /bin/bash user
USER user
CMD ["xeyes"]
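The failure here is that the container inherits the host's $DISPLAY, which on macOS is an XQuartz launchd socket path (/private/tmp/com.apple.launchd.…) that does not exist inside the container. A hedged sketch of a run command that targets XQuartz over TCP instead (this assumes "Allow connections from network clients" is enabled in XQuartz, and reuses the my-xeyes image name from the question):

```shell
# Authorize local clients against XQuartz, then override DISPLAY so the
# client reaches the host's X server through Docker Desktop's
# host.docker.internal name rather than the macOS socket path.
xhost +localhost
docker run -it --rm \
  -e DISPLAY=host.docker.internal:0 \
  my-xeyes
```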

I can do it two ways. I use Alpine, but in both cases X11 is still provided via XQuartz.
Both methods tested with:
macOS Ventura 13.1
Docker Desktop 14.6.2
XQuartz 2.8.2
This one needs socat:
# Install socat and XQuartz
brew install socat
brew install --cask xquartz
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\" &
# Ensure XQuartz and docker are started - you can do this graphically too
open -a XQuartz
open -a docker
# Run alpine latest
docker run -it -e DISPLAY=host.docker.internal:0 alpine:latest
# Inside docker container. Install and run xeyes
/ # apk update && apk add xeyes && xeyes
This one doesn't need socat:
# Start XQuartz - could equally do it with Spotlight Search
open -a XQuartz
Go to XQuartz -> Settings -> Security and check "Authenticate connections" and "Allow connections from network clients"
Stop and restart XQuartz, and wait until it is running
xhost + $(hostname)
MarkBookPro.local being added to access control list
# Start alpine image
docker run --rm -e DISPLAY=host.docker.internal:0 -it alpine
# Inside docker container. Install and run xeyes
/ # apk update && apk add xeyes && xeyes

Related

Issues running a docker container with a GUI application with host network mode

I am trying to create a docker container with a ROS install and a simulation setup to streamline the process for people joining the project later.
When I run rviz this way, I get the rviz window showing up on my host just fine, as expected following this ros tutorial:
sudo docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" xrf-robot-repo rviz
ROS Master isn't running so this output is expected
Now my issue is that when I run my container in host network mode (--net=host), the rviz dialogue does not show up anymore. Here's what I run:
sudo docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" --net=host xrf-robot-repo rviz
I don't think the errors it prints have anything to do with the GUI window not showing up.
I have no idea why the GUI window does not show up. I was hoping for some guidance here. I would guess this would have something to do with the different network mode affecting how the x11 forwarding may work, but I am not sure how to further look into this.
Here's what my Dockerfile looks like that I used to build the image in case it may be helpful:
FROM osrf/ros:melodic-desktop-full
SHELL ["/bin/bash", "-c"]
RUN apt-get update && apt-get install -y --no-install-recommends \
git apt-utils python3-catkin-tools \
&& rm -rf /var/lib/apt/lists/*
RUN source ./ros_entrypoint.sh && git clone https://github.com/RumailM/xrf-robot-stack
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && catkin_init_workspace
RUN apt-get update
ARG DEBIAN_FRONTEND=noninteractive
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && rosdep install \
    --from-paths src --ignore-src -r -y
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && catkin build
The reason I need to use host network mode is that I would like the host to be able to communicate with the rosmaster node and any other nodes within the container. I also do not know beforehand what nodes may exist outside the container or which ports they may communicate on, so the obvious answer of forwarding only the ports that I will use will not work (the ports may change at runtime). Forwarding large ranges of ports does not seem viable either.
Any guidance is appreciated!
You should add the --privileged option to the docker run command. This is related to the issue Qt applications and network host.
I ran into a similar issue and was able to resolve by following this question on ROS Answers.
Also, to be able to run rviz properly, I needed the docker container to access my graphics card drivers (NVIDIA in my case). So I needed to add -e NVIDIA_VISIBLE_DEVICES=all, -e NVIDIA_DRIVER_CAPABILITIES=all, and --runtime nvidia to the docker run command.
This was the final run command:
docker run -it --rm \
--privileged \
--network host \
-e NVIDIA_VISIBLE_DEVICES=all \
-e NVIDIA_DRIVER_CAPABILITIES=all \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--name "$CONTAINER_NAME" \
--runtime nvidia \
xrf-robot-repo \
/bin/bash
Hope this helps

Docker in Docker | Github actions - Self Hosted Runner

I am trying to create a self-hosted runner for GitHub Actions on Kubernetes. As a first step, I was trying with the Dockerfile below:
FROM ubuntu:18.04
# set the github runner version
ARG RUNNER_VERSION="2.283.1"
# update the base packages and add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN useradd -r -g docker nonroot
# install python and the packages your code depends on, along with jq so we can parse JSON
# add additional packages as necessary
RUN apt-get install -y curl jq build-essential libssl-dev apt-transport-https ca-certificates curl software-properties-common
# install docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" \
&& apt update \
&& apt-cache policy docker-ce \
&& apt install docker-ce -y
ENV TINI_VERSION v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
RUN usermod -aG docker nonroot
USER nonroot
# set the entrypoint to the start.sh script
ENTRYPOINT ["/tini", "--"]
CMD ["/bin/bash"]
After doing a build, I run the container with the below command:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it srunner
When I try to pull an image, I get the below error:
nonroot@0be0cdccb29b:/$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
nonroot@0be0cdccb29b:/$
Please advise if there is a possible way to run docker as non-root inside a docker container.
Instead of using sockets, there is also a way to connect to the outer Docker daemon, from Docker inside a container, over TCP.
Linux example:
Run ifconfig; it will print the Docker network interface that is created when you install Docker on a host node. It is usually named docker0; note down the IP address of this interface.
Now modify /etc/docker/daemon.json and add tcp://IP:2375 to the hosts section. Restart the Docker service.
Run containers with extra option: --add-host=host.docker.internal:host-gateway
Inside any such container, the address tcp://host.docker.internal:2375 now points to the outside docker engine.
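With the daemon listening on TCP, the client inside the container selects it through the DOCKER_HOST environment variable. A minimal sketch (the host name and port follow the setup above; note that an unauthenticated TCP socket grants root-equivalent access to the host, so only do this on trusted networks):

```shell
# Inside the container: point the docker CLI at the outer engine over TCP.
export DOCKER_HOST=tcp://host.docker.internal:2375
docker version    # now talks to the host daemon, no socket mount required
```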
Try adding your username to the docker group as suggested here.
Additionally, you should check your kernel compatibility.
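When /var/run/docker.sock is bind-mounted, the socket keeps the host's group ownership, and that GID may not match any group inside the container. A sketch of aligning the container user with the socket's GID (run as root inside the container; the group name hostdocker is invented for illustration):

```shell
# Read the GID that actually owns the mounted socket (it comes from the host).
HOST_DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
# Create a matching group if none exists, then add the runner user to it.
groupadd -g "$HOST_DOCKER_GID" hostdocker 2>/dev/null || true
usermod -aG "$HOST_DOCKER_GID" nonroot
```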

How to build a Dockerfile with a few needed ports

I want to learn Docker, so I decided to create all the files (Dockerfile, docker-compose) step by step on my own.
I need CentOS 8 with httpd and Webmin. I prepared a Dockerfile with httpd and it works very well, but when I try to add a RUN instruction that installs Webmin I can't figure out how to open the Webmin panel. Port 10000 doesn't work, or it works but I don't know how to reach it.
Also, if I need CentOS 8 with phpMyAdmin, Webmin, Apache, etc., should I create a docker-compose with CentOS 8 and phpMyAdmin separately? Or is there another way?
My Dockerfile
FROM centos:8
RUN yum update -y && yum install -y \
httpd \
httpd-tools \
wget \
perl \
perl-Net-SSLeay \
openssl perl-Encode-Detect
RUN wget https://prdownloads.sourceforge.net/webadmin/webmin-1.930-1.noarch.rpm \
&& rpm -U webmin-1.930-1.noarch.rpm
EXPOSE 80
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]
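One thing worth noting about the Dockerfile above: EXPOSE only documents a port, it does not publish it, and the CMD starts only httpd, so Webmin is never running. A hedged sketch of what reaching both services could look like, assuming Webmin is also started inside the container and using the made-up image name my-centos-webmin:

```shell
# Publish both ports; EXPOSE in the Dockerfile alone does not map them.
# -p host_port:container_port makes httpd (80) and Webmin (10000) reachable.
docker run -d -p 80:80 -p 10000:10000 my-centos-webmin
# Webmin would then be at https://localhost:10000 (it serves HTTPS by default).
```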

Running systemd in docker container causes host crash

I'm trying to create a systemd based docker container, but when I try running the built container my system crashes. I think running init in the container might somehow be conflicting with systemd on my host.
When I try to run the docker container I am logged out of my account and briefly see what looks like my system going through a boot process. My host is running Arch Linux, with linux 4.20.7.
It is only when I attempt to "boot" the container by running systemd via /sbin/init, that the problem occurs.
docker run -it \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:rw \
--privileged 66304e3bc48
Dockerfile (adapted from solita/ubuntu-systemd):
FROM ubuntu:18.04
# Don't start any optional services.
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
RUN apt-get update && \
apt-get install --yes \
python sudo bash ca-certificates dbus systemd && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN systemctl set-default multi-user.target
RUN systemctl mask dev-hugepages.mount sys-fs-fuse-connections.mount
STOPSIGNAL SIGRTMIN+3
# Workaround for docker/docker#27202, technique based on comments from docker/docker#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
I would expect the container to just boot up running systemd, and I'm not sure what I might be doing wrong.
Docker does not want to include systemd by default because Docker positions itself as an application container runtime (which means one application per container). There is another type of container called a system container; the best known are OpenVZ, LXC/LXD, and systemd-nspawn. All of these run a full OS with systemd, as if it were a virtual machine.
Using systemd inside Docker is also risky compared to running systemd inside LXD.
There is even a newcomer called Podman, a Docker clone that works with systemd out of the box when you use an image that already contains systemd, like the Ubuntu cloud images: http://uec-images.ubuntu.com/releases/server/bionic/release/
So my advice is to test LXD and systemd-nspawn, and keep an eye on Podman; it solves what Docker does not want to solve. Read this to understand: https://lwn.net/Articles/676831/
references:
https://coreos.com/rkt/docs/latest/rkt-vs-other-projects.html
https://podman.io/slides/2018_10_01_Replacing_Docker_With_Podman.pdf
https://containerjournal.com/features/system-containers-vs-application-containers-difference-matter
Runtimes And the Curse of the Privileged Container
https://brauner.github.io/2019/02/12/privileged-containers.html
I ended up using the paulfantom/ubuntu-molecule Docker image.
Currently it looks like they're just installing systemd, setting some environment variables, and using the systemd binary directly as the entry point. It seems to work without the issues I mentioned in the original post.
Dockerfile
FROM ubuntu:18.04
ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
RUN sed -i 's/# deb/deb/g' /etc/apt/sources.list
# hadolint ignore=DL3008
RUN apt-get update \
&& apt-get install -y --no-install-recommends systemd python sudo bash iproute2 net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# hadolint ignore=SC2010,SC2086
RUN cd /lib/systemd/system/sysinit.target.wants/ \
&& ls | grep -v systemd-tmpfiles-setup | xargs rm -f $1
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/basic.target.wants/* \
/lib/systemd/system/anaconda.target.wants/* \
/lib/systemd/system/plymouth* \
/lib/systemd/system/systemd-update-utmp*
RUN systemctl set-default multi-user.target
ENV init /lib/systemd/systemd
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/lib/systemd/systemd"]
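For comparison with the crashing setup in the question, a common pattern is to start such an image without mounting the host's cgroup hierarchy read-write. A sketch, with my-systemd-image standing in for the image built from this Dockerfile (a --privileged run with a read-write cgroup mount is one plausible way a containerized systemd can interfere with the host's):

```shell
# Mount cgroups read-only and give systemd the tmpfs mounts it expects,
# instead of a --privileged run with /sys/fs/cgroup mounted read-write.
docker run -d --name systemd-test \
  --tmpfs /run --tmpfs /run/lock \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  my-systemd-image
```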
Matching the host as closely as possible was the original goal of the docker-systemctl-replacement script. You can test-drive scripts in a container that may later be run on a virtual machine. It lets you run some systemctl commands without an active systemd daemon.
It can also serve as an init daemon if you wish. With it, a systemd-enabled operating system will feel quite similar inside a container.
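A sketch of how the script is typically wired in, based on the pattern from the project's examples (the script name systemctl3.py and the httpd service are assumptions for illustration):

```dockerfile
# Hypothetical sketch: shadow the real systemctl with the replacement script
# so RUN/CMD can use systemctl syntax without a running systemd daemon.
FROM centos:8
COPY systemctl3.py /usr/bin/systemctl
RUN yum install -y httpd && systemctl enable httpd
CMD ["/usr/bin/systemctl"]
```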

Run firefox in docker container on raspberry pi

I've followed this thread
to build a docker container to run on a Raspberry Pi. I've managed to run it on a normal CentOS machine, but on the Pi I always get this error (I always run it from the MobaXterm application on Windows 10 because it has X11 support):
Unable to init server: broadway display type not supported 'machinename:0'
Error: cannot open display: machinename:0
I've tried to build it with 2 different dockerfile, the 1st:
FROM resin/rpi-raspbian:latest
# Make sure the package repository is up to date
RUN apt-get update && apt-get install -y firefox-esr
USER root
ENV HOME /root
CMD /usr/bin/firefox
And tried to run it with this script:
#!/usr/bin/env bash
CONTAINER=firefox_wo_vnc
COMMAND=/bin/bash
DISPLAY="machinename:0"
USER=$(whoami)
docker run -ti --rm \
  -e DISPLAY \
  -v "/c/Users/$USER:/home/$USER:rw" \
  $CONTAINER \
  $COMMAND
and I got the error :(
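Separate from the display error, note that in the script above DISPLAY is assigned but never exported, so -e DISPLAY passes an empty value into the container. A sketch of the fix:

```shell
# export makes DISPLAY visible to docker run's "-e DISPLAY" pass-through;
# a plain assignment stays local to the script's shell.
export DISPLAY="machinename:0"
docker run -ti --rm \
  -e DISPLAY \
  -v "/c/Users/$USER:/home/$USER:rw" \
  "$CONTAINER" \
  "$COMMAND"
```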
The 2nd that I've tried:
# Firefox over VNC
#
# VERSION 0.1
# DOCKER-VERSION 0.2
FROM resin/rpi-raspbian:latest
RUN apt-get update
# Install vnc, xvfb in order to create a 'fake' display and firefox
RUN apt-get install -y x11vnc xvfb firefox-esr
RUN mkdir ~/.vnc
# Setup a password
RUN x11vnc -storepasswd 1234 ~/.vnc/passwd
# Autostart firefox (might not be the best way to do it, but it does the trick)
RUN bash -c 'echo "firefox" >> /.bashrc'
And tried to run:
#!/usr/bin/env bash
CONTAINER=firefoxvnc
COMMAND=/bin/bash
DISPLAY="machinename:0"
USER=$(whoami)
docker run \
-it \
--rm \
--user=$USER \
--workdir="/home/$USER" \
-v "/c/Users/$USER:/home/$USER:rw" \
-e DISPLAY \
$CONTAINER \
$COMMAND
This way I can log in via VNC, but Firefox is not running and I can't actually see anything, just an empty desktop.
I've read many threads, and I've installed xorg, openbox, x11-apps...
