XQuartz: Can't Open Display Mac OS - docker

While trying to follow these SO instructions for getting a simple Xeyes application running from within a Docker container on a Mac (10.15.5) using XQuartz, this is what I get:
$ docker run -it -e DISPLAY="${IP}:0" -v /tmp/.X11-unix:/tmp/.X11-unix so_xeyes
/work # xeyes
Error: Can't open display: 192.168.1.9:0
Here are the steps to reproduce:
$ brew install --cask xquartz
Dockerfile:
# Base Image
FROM alpine:latest
RUN apk update && \
    apk add --no-cache xeyes
# Set a working directory
WORKDIR /work
# Start a shell by default
CMD ["ash"]
Build image with:
$ docker build -t so_xeyes .
And run the Docker Container/xeyes with this:
# Set your Mac IP address
IP=$(/usr/sbin/ipconfig getifaddr en0)
echo $IP
192.168.1.9
# Allow connections from Mac to XQuartz
/opt/X11/bin/xhost + "$IP"
192.168.1.9 being added to access control list
# Run container
docker run -it -e DISPLAY="${IP}:0" -v /tmp/.X11-unix:/tmp/.X11-unix so_xeyes
When inside the container, type:
xeyes
BUT, I get the following error: Error: Can't open display: 192.168.1.9:0
Does anyone have an idea how I can resolve this or to investigate further?

@MarkSetchell gave me a hint by suggesting I needed to modify XQuartz Preferences > Security...
But, even after selecting "Allow connections from network clients", it still didn't work.
Then I found a Gist with a comment that gave me the missing piece: after making the security change, that person needed to reboot their Mac as well: https://gist.github.com/cschiewek/246a244ba23da8b9f0e7b11a68bf3285
So, after I made the change AND rebooted my Mac, it worked!
Thanks for guiding me to the final answer!
ALSO NOTE: You do NOT need to volume mount the .X11 directory for this to work:
docker run -it -e DISPLAY="${IP}:0" so_xeyes
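Putting it all together, the Mac-side sequence that worked looks like this (a recap sketch of the steps above; it assumes XQuartz is running, "Allow connections from network clients" is enabled, and the Mac has been rebooted since changing that setting):
IP=$(/usr/sbin/ipconfig getifaddr en0)   # this Mac's LAN address
/opt/X11/bin/xhost + "$IP"               # allow X11 connections from that address
docker run -it -e DISPLAY="${IP}:0" so_xeyes
# then, inside the container, run: xeyes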

By default, X11 does not listen over TCP/IP. You can enable that in Settings if you want, but I don't think it's necessary here. Docker should be able to route traffic to the unix domain socket set up by launchd for DISPLAY (e.g. /private/tmp/com.apple.launchd.jTIfZplv7A/org.xquartz:0).
If that doesn't work, you should reach out to Docker to add support for that as it's much preferred to using TCP for X11 traffic.
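To see which transport is actually available on the Mac side, a quick check (just a sketch; lsof ships with macOS) is:
# is anything listening on the default X11 TCP port?
lsof -iTCP:6000 -sTCP:LISTEN
# what does the launchd-managed DISPLAY look like locally?
echo $DISPLAY   # e.g. /private/tmp/com.apple.launchd.jTIfZplv7A/org.xquartz:0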

Related

How to show GUI apps from a Docker Desktop container on Windows 11

According to this article, Windows 11 natively supports running X11 and Wayland applications on WSL.
I tried to do the same through a Docker container, setting the environment variable DISPLAY="host.docker.internal:0.0" and running a GUI application (like gedit). But instead I got this error:
Unable to init server: Could not connect: Connection refused
Gtk-WARNING **: 17:05:50.416: cannot open display: host.docker.internal:0.0
I stumbled upon your question while attempting the same thing as you and actually got it to work with the aid of this blog post from Microsoft. I use a minimal Dockerfile based on Ubuntu that installs gedit:
FROM ubuntu:22.04
RUN apt update -y && apt install -y gedit
CMD ["gedit"]
Create the image the usual way, e.g. docker build . -t guitest:1.0
On the WSL command line, start it like this:
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /mnt/wslg:/mnt/wslg \
    -e DISPLAY \
    -e WAYLAND_DISPLAY \
    -e XDG_RUNTIME_DIR \
    -e PULSE_SERVER \
    guitest:1.0
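If the app does not come up, a quick sanity check on the WSL side is to confirm that WSLg has populated these variables and sockets (a sketch; the values shown are the usual WSLg defaults):
echo "DISPLAY=$DISPLAY"                   # typically :0
echo "WAYLAND_DISPLAY=$WAYLAND_DISPLAY"   # typically wayland-0
echo "XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR"
ls -l /tmp/.X11-unix                      # the X0 socket WSLg exposes should be here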
I hope this is of good use to you as well.
This answer is heavily based on what chrillof has said. Thanks for the excellent start!
The critical things here for Docker Desktop users on Windows with WSL2 are that:
The container host (i.e. the docker-desktop-data WSL2 distribution) does not have a /tmp/.X11-unix itself. This folder is actually found at /mnt/host/wslg/.X11-unix on the docker-desktop distribution, which translates to /run/desktop/mnt/host/wslg/.X11-unix when running containers.
There are no baked-in environment variables to assist you, so you need to specify the environment variables explicitly with these folders in mind.
I found this GitHub issue where someone had to manually set environment variables, which allowed me to connect the dots between what others experience directly on WSL2 and chrillof's solution.
Therefore, modifying chrillof's solution using PowerShell from the host, it looks more like:
docker run -it -v /run/desktop/mnt/host/wslg/.X11-unix:/tmp/.X11-unix `
    -v /run/desktop/mnt/host/wslg:/mnt/wslg `
    -e DISPLAY=:0 `
    -e WAYLAND_DISPLAY=wayland-0 `
    -e XDG_RUNTIME_DIR=/mnt/wslg/runtime-dir `
    -e PULSE_SERVER=/mnt/wslg/PulseServer `
    guitest:1.0
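If it still fails, it is worth checking (a sketch, runnable from PowerShell) that the WSLg socket really is visible at that path inside the Docker Desktop VM:
docker run --rm -v /run/desktop/mnt/host/wslg/.X11-unix:/tmp/.X11-unix alpine ls -l /tmp/.X11-unix
# expect a socket named X0; an empty listing means the bind path is wrong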
On my computer, it looks like this: (screenshot: demo of WSLg X11)
To be clear, I have not checked whether audio is functional, but this does allow you to avoid installing another X11 server if you already have WSL2 installed.

Issues with docker build command and GitHub

I'm trying to run a dev environment as a Docker machine.
I created the following Dockerfile:
FROM rails:4.2
MAINTAINER Chen Kinnrot <kinnrot@gmail.com>
RUN mkdir -p /var/app
COPY Gemfile /var/app/Gemfile
WORKDIR /var/app
RUN bundle install
CMD rails s -b 0.0.0.0
When running docker build -t dev .
I get the following message:
fatal: unable to connect to github.com:
github.com: Name or service not known
Why is that, and how can I solve this annoying issue?
This is a known issue with VirtualBox / boot2docker; when switching networks, boot2docker sometimes loses its DNS information. See these issues: https://github.com/boot2docker/boot2docker/issues/776, https://github.com/docker/machine/issues/1857
You can either try to restart the machine:
docker-machine stop default
docker-machine start default
Or set the right nameserver inside the virtual machine:
docker-machine ssh default
echo "nameserver 8.8.8.8" > /etc/resolv.conf

Running 2 services

I'm building an image for GitHub's Linkurious project, based on an image already on the hub for the neo4j database. The neo4j image automatically runs the server on port 7474 and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run. If I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
Here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example in "Advanced Usecase with Docker: Connecting Containers".
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like nodejs), where the CMD cd linkurious.js && npm start would completely override the neo4j base image CMD (meaning neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
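If you nevertheless want both processes in one container (going against the one-service-per-container convention), one possible workaround is a small wrapper script used as the CMD. This is only a sketch; start.sh is hypothetical, and it assumes the git clone above landed in /linkurious.js:
#!/bin/sh
# start.sh (hypothetical): launch neo4j in the background through the base
# image's entrypoint, then keep the node app in the foreground
/docker-entrypoint.sh neo4j &
cd /linkurious.js && npm start
In the Dockerfile, COPY start.sh /start.sh and replace the last line with CMD ["/start.sh"].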

Docker Toolbox volumes on Windows don't refresh changes in the container

I am starting with Docker on Windows and I am trying to use volumes to manage data in containers.
My host environment is:
Windows 8.1
Docker Toolbox 1.8
VirtualBox 5.0.6
I've created an nginx image using the following Dockerfile.
Dockerfile
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
I've created an nginx container using the following command:
docker run -d --name nge -v //c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
b738fef9cc4d135416a8cca4caf869acf944319b7c3c61129b11f37f9d891598
Then I go to my browser and I can see the web page:
However, when I make a change to my index.html file, it doesn't refresh in the browser.
(screenshot: editing my file)
(screenshot: the browser after Ctrl+F5)
I went to the VirtualBox machine to check that my shared-directories configuration was OK.
Then I inspected my nge container with the following command:
docker inspect ng1
(screenshot: docker inspect output)
What is happening with the volumes? Why can't I see my changes?
After a couple of days I found the solution.
First of all, Docker on Windows (and even on Mac) uses a boot2docker instance running in VirtualBox.
(diagrams: the boot2docker setup on Mac and on Windows)
Next, the official Docker documentation on docker volume says:
Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory
However, after finding a solution I decided to change the default /c/Users to another path, just to keep things ordered. With this in mind I did the following steps:
Define your own workspace directory. In my case it is /e/arquitectura (optional; if you want, you can use the default path, which is /c/Users).
Verify the configuration of the virtual machine (in the default machine go to Configuration > Shared Folders).
Connect to the default machine and mount the directory using the alias name:
sudo mount -t vboxsf alias-name-virtualbox some-path-in-boot2docker
# In my case (boot2docker instance)
$ cd
$ mkdir arquitectura
$ sudo mount -t vboxsf arquitectura /arquitectura
Finally, create a new container (or restart an existing one if you haven't changed the /c/Users path):
# In my case (docker client)
$ docker run -d --name nge -v //arquitectura/src:/usr/share/nginx/html -p 8081:80 ng1
Now it works.
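To confirm the mount works end to end (a sketch; the paths match the example above), change a file under the shared directory and re-request the page using the VM's IP:
echo "hello volumes" > /e/arquitectura/src/index.html
curl "http://$(docker-machine ip default):8081/"   # should show the new content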

Running Chromium inside Docker - Gtk: cannot open display: :0

When I try to run Chromium inside a Docker container I see the following error: Gtk: cannot open display: :0
Dockerfile: (based on https://registry.hub.docker.com/u/jess/chromium/dockerfile)
FROM debian:jessie
# Install Chromium
RUN sed -i.bak 's/jessie main/jessie main contrib non-free/g' /etc/apt/sources.list && \
apt-get update && apt-get install -y \
chromium \
chromium-l10n \
libcanberra-gtk-module \
libexif-dev \
libpango1.0-0 \
libv4l-0 \
pepperflashplugin-nonfree \
--no-install-recommends && \
mkdir -p /etc/chromium.d/
# Run Chromium by default
CMD ["/usr/bin/chromium", "--no-sandbox", "--user-data-dir=/data"]
Build and run:
docker build -t chromium .
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --privileged chromium
and the error:
[1:1:0202/085603:ERROR:browser_main_loop.cc(164)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
No protocol specified
[1:1:0202/085603:ERROR:browser_main_loop.cc(210)] Gtk: cannot open display: :0
I don't know much about Chromium, but I did work with X way back when :-) When you tell an X client to connect to :0, what you are saying is: connect to port 6000 (or whatever base port your X server runs on) + 0, i.e. port 6000 in this case. In fact, DISPLAY is IP:PORT (with the +6000 offset as mentioned above). The X server is running on your host, so if you set:
DISPLAY=your_host_ip:0
that might work. However, X servers do not allow connections from just any old client, so you will need to open up your X server. On your host, run
xhost +
before running the docker container. All of this assumes you can run Chromium on your host (that is, an X server exists on your host).
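On a typical Linux host, that might look like this (a sketch; hostname -I prints the host's addresses and the first is usually the LAN one):
HOST_IP=$(hostname -I | awk '{print $1}')   # the host's LAN address
xhost +                                     # open the X server (very permissive)
docker run -e DISPLAY=$HOST_IP:0 -v /tmp/.X11-unix:/tmp/.X11-unix --privileged chromium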
Try
xhost local:root
This solved mine; I am on Debian Jessie. https://github.com/jfrazelle/dockerfiles/issues/4
Adding as reference (see real answer from greg)
On your Linux host add
xhost +"local:docker@"
In the Docker image add
RUN apt-get update
RUN apt-get install -qqy x11-apps
and then run
# --rm           : delete the container when bash exits
# -it            : connect a TTY
# DISPLAY        : export the DISPLAY env variable for the X server
# $XAUTH         : provide authority information to the X server
#                  (XAUTH must be set beforehand, e.g. XAUTH=$HOME/.Xauthority)
# /tmp/.X11-unix : mount the X11 socket
sudo docker run \
    --rm \
    -it \
    --privileged \
    --env DISPLAY=unix$DISPLAY \
    -v $XAUTH:/root/.Xauthority \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /home/alex/coding:/coding \
    alexcpn/nvidia-cuda-grpc:1.0 bash
Inside the container, check with a sample command:
xclock
For Ubuntu 20.04, changing DISPLAY=:0 to DISPLAY=$DISPLAY fixed it for me; my local env had $DISPLAY set to :1:
docker run --rm -ti --net=host -e DISPLAY=$DISPLAY fr3nd/xeyes
So, I also had a requirement to open a graphical application within my Docker container. These are the steps that worked for my environment (Docker version: 19.03.12, container OS: Ubuntu 18.04):
Before running the container, make the host's X server accept connections from any client by running xhost +. This is a very non-restrictive way to connect to the host's X server, and you can restrict access as per the other answers given.
Then run the container with the --network=host option (e.g. docker run --network=host <my image name>).
Once the container is up, log in to its shell and launch your app with DISPLAY=:0 (e.g. DISPLAY=:0 <my graphical app>).
I got it to work on a Windows host but not on my Linux Mint (Ubuntu) host. The reason was that I was using Docker Desktop on Linux, which uses a VM under the hood.
Solution: Shut down Docker Desktop and install Docker Engine. Other than that, also do as in the other answers.
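You can check which backend your docker CLI is actually talking to (a sketch; docker context ls is a standard CLI command) by looking at the active context:
docker context ls
# the entry marked with * is active; "desktop-linux" means the Docker Desktop VM,
# "default" usually means a local Docker Engine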
What is needed is an alias from your Docker hostname to the outer hostname. A DISPLAY starting with just a : means localhost. Basically, your hostname inside Docker needs to resolve via /etc/hosts to the same name as the outer host, because that is the name stored in .Xauthority.
I found this script to automatically get the IP of your PC:
FOR /F "tokens=4 delims= " %%i in ('route print ^| find " 0.0.0.0"') do set localIp=%%i
Create a bat file and put this in it:
FOR /F "tokens=4 delims= " %%i in ('route print ^| find " 0.0.0.0"') do set localIp=%%i
docker run -ti -v /tmp/.X11-unix -v /tmp/.docker.xauth -e XAUTHORITY=/tmp/.docker.xauth --net=host -e DISPLAY=%localIp%:0.0 your-container
