According to this article, Windows 11 natively supports running X11 and Wayland applications on WSL.
I tried to do the same through a Docker container by setting the environment variable DISPLAY="host.docker.internal:0.0" and running a GUI application (like gedit), but instead I got this error:
Unable to init server: Could not connect: Connection refused
Gtk-WARNING **: 17:05:50.416: cannot open display: host.docker.internal:0.0
I stumbled upon your question while attempting the same thing and actually got it to work with the aid of this blog post from Microsoft. I use a minimal Dockerfile that is based on Ubuntu and installs gedit:
FROM ubuntu:22.04
RUN apt update -y && apt install -y gedit
CMD ["gedit"]
Create the image the usual way, e.g. docker build . -t guitest:1.0
On the WSL command line, start it like this:
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
-v /mnt/wslg:/mnt/wslg \
-e DISPLAY \
-e WAYLAND_DISPLAY \
-e XDG_RUNTIME_DIR \
-e PULSE_SERVER \
guitest:1.0
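If you launch several different images this way, the socket and environment plumbing can be collected in a small helper script. This is just a sketch: wslg_ready and wslg_args are hypothetical names, and the paths are the WSLg defaults used in the command above.

```shell
# wslg_ready: check that the WSLg sockets this answer relies on exist.
wslg_ready() {
  [ -d /tmp/.X11-unix ] && [ -d /mnt/wslg ]
}

# wslg_args: emit the mount/env flags from the command above, so every
# GUI image is launched with the same X11/Wayland/audio wiring.
wslg_args() {
  printf '%s ' \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /mnt/wslg:/mnt/wslg \
    -e DISPLAY -e WAYLAND_DISPLAY -e XDG_RUNTIME_DIR -e PULSE_SERVER
}
```

Usage would then be docker run -it $(wslg_args) guitest:1.0 (the unquoted expansion is fine here because none of the flags contain spaces).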
I hope this works for you as well.
This answer is heavily based on what chrillof has said. Thanks for the excellent start!
The critical things here for Docker Desktop users on Windows with WSL2 are that:
The container host (i.e. docker-desktop-data WSL2 distribution) does not have a /tmp/.X11-unix itself. This folder is actually found in the /mnt/host/wslg/.X11-unix folder on the docker-desktop distribution which translates to /run/desktop/mnt/host/wslg/.X11-unix when running containers.
There are no baked-in environment variables to assist you, so you need to specify the environment variables explicitly with these folders in mind.
I found this GitHub issue where someone had to manually set environment variables, which allowed me to connect the dots between what others experience directly on WSL2 and chrillof's solution.
Therefore, modifying chrillof's solution using PowerShell from the host, it looks more like:
docker run -it -v /run/desktop/mnt/host/wslg/.X11-unix:/tmp/.X11-unix `
-v /run/desktop/mnt/host/wslg:/mnt/wslg `
-e DISPLAY=:0 `
-e WAYLAND_DISPLAY=wayland-0 `
-e XDG_RUNTIME_DIR=/mnt/wslg/runtime-dir `
-e PULSE_SERVER=/mnt/wslg/PulseServer `
guitest:1.0
On my computer, it looks like this (screenshot demo of WSLg X11 omitted).
To be clear, I have not checked if audio is functional or not, but this does allow you to avoid the installation of another X11 server if you already have WSL2 installed.
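A launch script that works under both setups could pick the socket path at runtime. This is a sketch based on the two paths discussed above; the existence test for the Docker Desktop path is an assumption about how its WSL distributions expose WSLg on your machine.

```shell
# x11_socket_path: choose which X11 socket directory to bind-mount.
# Docker Desktop exposes WSLg under /run/desktop/...; a dockerd running
# directly inside a WSL2 distro sees the usual /tmp/.X11-unix.
x11_socket_path() {
  if [ -d /run/desktop/mnt/host/wslg/.X11-unix ]; then
    echo /run/desktop/mnt/host/wslg/.X11-unix
  else
    echo /tmp/.X11-unix
  fi
}

# Example: docker run -v "$(x11_socket_path):/tmp/.X11-unix" ...
```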
Related
While trying to follow these SO instructions for getting a simple Xeyes application running from within a Docker container on a Mac (10.15.5) using XQuartz, this is what I get:
$ docker run -it -e DISPLAY="${IP}:0" -v /tmp/.X11-unix:/tmp/.X11-unix so_xeyes
/work # xeyes
Error: Can't open display: 192.168.1.9:0
Here are the steps to reproduce:
$ brew install --cask xquartz
Dockerfile:
# Base Image
FROM alpine:latest
RUN apk update && \
apk add --no-cache xeyes
# Set a working directory
WORKDIR /work
# Start a shell by default
CMD ["ash"]
Build image with:
$ docker build -t so_xeyes .
And run the Docker Container/xeyes with this:
# Set your Mac IP address
IP=$(/usr/sbin/ipconfig getifaddr en0)
echo $IP
192.168.1.9
# Allow connections from Mac to XQuartz
/opt/X11/bin/xhost + "$IP"
192.168.1.9 being added to access control list
# Run container
docker run -it -e DISPLAY="${IP}:0" -v /tmp/.X11-unix:/tmp/.X11-unix so_xeyes
When inside the container, type:
xeyes
BUT, I get the following error: Error: Can't open display: 192.168.1.9:0
Does anyone have an idea how I can resolve this or to investigate further?
@MarkSetchell gave me a hint by suggesting I needed to modify the XQuartz Preferences > Security settings.
But, even after selecting "Allow connections from network clients", it still didn't work.
Then I found a Gist that gave me a little more information, because someone commented that after making the change they needed to reboot their Mac (again): https://gist.github.com/cschiewek/246a244ba23da8b9f0e7b11a68bf3285
So, after I made the change AND rebooted my Mac, it worked!
Thanks for guiding me to the final answer!
ALSO NOTE: You do NOT need to volume mount the .X11 directory for this to work:
docker run -it -e DISPLAY="${IP}:0" so_xeyes
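The only container-side configuration in this route is the DISPLAY string, which is just the Mac's IP plus a display number. A tiny helper (mac_display is a hypothetical name) keeps that explicit:

```shell
# mac_display: build the DISPLAY value for the container from the host
# IP and an optional display number (defaults to 0).
mac_display() {
  printf '%s:%s' "$1" "${2:-0}"
}

# e.g. docker run -it -e DISPLAY="$(mac_display "$IP")" so_xeyes
```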
By default, X11 does not listen over TCP/IP. You can enable that in Settings if you want, but I don't think it's necessary here. Docker should be able to route traffic to the unix domain socket set up by launchd for DISPLAY (eg: /private/tmp/com.apple.launchd.jTIfZplv7A/org.xquartz:0).
If that doesn't work, you should reach out to Docker to add support for that as it's much preferred to using TCP for X11 traffic.
I am fairly new to Docker (I learned about it yesterday and found it interesting) and have absolutely no skill with it, so please try to make your answer as beginner-friendly as possible.
I ran an Ubuntu image and tried to install and run Wireshark (a GUI-based packet capture tool) in it, but on running it I got the following error:
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
dbus[2537]: The last reference on a connection was dropped without closing the connection. This is a bug in an application. See dbus_connection_unref() documentation for details.
Most likely, the application was supposed to call dbus_connection_close(), since this is a private connection.
D-Bus not built with -rdynamic so unable to print a backtrace
Aborted (core dumped)
I made my docker container with :
sudo docker run --name ubuntu -v /home/anmol/Projects/Docker/Ubuntu/:/home -it --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --env="DISPLAY" --net=host ubuntu
Additionally, I have tried :
xhost +local:docker
which didn't work so I tried :
xhost +
which also didn't work and I kept getting the same error.
I have a feeling I am supposed to install some x11 package inside my container but I don't know which one or if that is the right thing to do.
After running xhost +, if you add --privileged to your docker command, it should work.
sudo docker run -it \
--privileged \
-e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/root/.Xauthority \
<imageId>
However, using xhost + and --privileged is not secure. They are not recommended in a production environment, because they expose the kernel and the hardware resources of the host.
I faced the same issue and followed @Doğuş's and @Casey Jones's answers as references, which almost helped.
I still had the issue because the container was not using the host network.
So my updated command is as follows:
docker run -it --privileged --net=host -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/root/.Xauthority <IMAGE_ID> /bin/bash
where IMAGE_ID is your docker image id.
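The command above relies on two things being present on the host before docker is even invoked; a small sanity check (gui_prereqs_ok is a hypothetical name) can fail fast with a clearer message than the X error:

```shell
# gui_prereqs_ok: verify the host-side prerequisites of the command
# above: a DISPLAY value and an Xauthority file to mount.
gui_prereqs_ok() {
  [ -n "$DISPLAY" ] && [ -f "$HOME/.Xauthority" ]
}

# e.g. gui_prereqs_ok || echo "no X session found" >&2
```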
Okay so for several projects I need to access my private repositories, so I'd like to forward the host's SSH Agent to the container to allow retrieving from these private repositories. Eventually I would like to implement this in docker-compose.
I've found a lot of answers and solutions pointing to something like this:
docker run --rm -t -i \
-v $SSH_AUTH_SOCK:/ssh-agent \
-e SSH_AUTH_SOCK=/ssh-agent \
alpine:3.6 sh
But when I run ssh-add -l in there (after making sure openssh is installed), I get the following error:
Error connecting to agent: Connection refused
I also tried this within my docker-compose setup, but it doesn't seem to work as it should.
Since most posts and solutions are several years old, I hope someone can help me with accurate, up-to-date info.
According to this issue you can forward your macOS ssh-agent to your docker container by adding -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" options to your docker run command, e.g.
docker run --rm -it \
-v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock \
-e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" \
docker_image
You can mount files, but not sockets: sharing sockets from macOS through the hypervisor into Docker containers is something that isn't supported yet. Various bug reports and acknowledgements exist, and some day it should work.
So in the meantime, you need to have something that forwards network traffic between the container and MacOS. One of the solutions that people point out is docker-ssh-agent-forward.
A different solution would be to run ssh-agent in a container and to access that from MacOS and the other containers - it's probably a bit more invasive but works. A solution is docker-ssh-agent.
Add keys to the launchd managed ssh-agent:
SSH_AUTH_SOCK=`launchctl getenv SSH_AUTH_SOCK` ssh-add
Forward the launchd managed ssh-agent to docker container:
docker run --rm -it \
-v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock:ro \
-e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" \
image ssh hosts
The mount option and the SSH_AUTH_SOCK value in the container are magic constants; do not change them.
launchctl getenv SSH_AUTH_SOCK may output an empty string on iTerm2 3.2+ due to a bug. The workaround is one of:
A portable way on newer macOS (>= 10.13, i.e. High Sierra) is launchctl asuser $UID launchctl getenv SSH_AUTH_SOCK, or
For older macOS, in iTerm2 > Prefs > Advanced, turn on "Enable multi-server daemon", or
For older macOS, in iTerm2 > Prefs > Advanced, turn off "Allow sessions to survive logging out and back in".
NOTE: if the launchctl problem cannot be worked around, there is another way: forwarding the ssh agent via a stdio tunnel.
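Before launching the container, it can help to check what state the agent is in. ssh-add -l uses distinct exit codes for this (0 = keys loaded, 1 = agent reachable but no identities, 2 = cannot connect), which a sketch like the following (agent_status is a hypothetical name) makes readable:

```shell
# agent_status: classify the local ssh-agent state via ssh-add's
# documented exit codes.
agent_status() {
  ssh-add -l >/dev/null 2>&1
  case $? in
    0) echo keys ;;      # agent reachable, keys loaded
    1) echo empty ;;     # agent reachable, no identities
    *) echo no-agent ;;  # cannot connect to an agent
  esac
}
```

Only the "keys" state means the forwarded socket inside the container will actually be able to authenticate.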
We recently had a lot of problems deploying the Linux version of our app for a client (updated libraries, missing libraries, install paths), and we are looking at using Docker for deployment.
Our app has a UI, so we naturally map that using
-e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
and we can actually see the UI popping up.
But when it's time to open a file, the problems start. We want to browse only the host filesystem and save any output file on the host (the output directory is determined by the location of the opened file).
Which strategy would you suggest for this?
We want the client to not see the difference between the app running locally or inside Docker. We are working on a launch script so it looks like the client would still be double-clicking on it to start the app. We can add all the configuration we need in there for the docker run command.
After recommendations by @CFrei and @Robert, here's a solution that seems to work well:
docker run \
-ti \
-u $(id -u):$(id -g) \
-v /home/$(id -un):/home/$(id -un) \
-v /etc/passwd:/etc/passwd \
-w $(pwd) \
MyDockerImage
And now, every file created inside that container is perfectly located in the right directory on the host with the ownership by the user.
And from inside the container, it really looks like the host, which will be very useful for the client.
Thanks again for your help guys!
And I hope this can help someone else.
As you may know, the container has its own filesystem, provided by the image it runs on top of.
You can map a host directory or file to a path inside the container, where your program expects it to be. This is known as a Docker volume. You're already doing that for the X11 socket communication (the -v flag).
For instance, for a file:
docker run -v /absolute/path/in/the/host/some.file:/path/inside/container/some.file
For a directory:
docker run -v /absolute/path/in/the/host/some/dir:/path/inside/container/some/dir
You can provide as many -v flags as you might need.
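If the launch script accumulates many mappings, building the flags from a list keeps it tidy. A small sketch (volume_flags is a hypothetical name; it assumes the paths contain no spaces):

```shell
# volume_flags: turn "host:container[:mode]" specs into docker -v flags.
volume_flags() {
  for spec in "$@"; do
    printf -- '-v %s ' "$spec"
  done
}

# e.g. docker run $(volume_flags /data:/data /etc/passwd:/etc/passwd:ro) img
```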
Here you can find more useful information.
I'm trying to follow the instructions here to install the Google Cloud SDK on my boot2docker environment, without success.
Is boot2docker a limited Linux? What is missing?
The error I get is:
(23) Failed writing body
Is boot2docker a limited Linux?
Yes, boot2docker is based on the Tiny Core distro, and has the following limitations:
Only C:\Users or /Users is mounted
symlinks are not supported.
If gcloud needs to access to another path or uses symlink, that would fail.
In general, boot2docker is there to host a Docker daemon; you install programs in containers rather than on boot2docker itself.
See for example blacklabelops/gcloud which includes the latest Google Cloud SDK along with all modules (as defined in its Dockerfile).
You would execute gcloud commands by running that container, not by installing gcloud directly on your boot2docker instance.
For instance:
docker run -it --rm \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "CLOUDSDK_CORE_PROJECT=example-project" \
-e "CLOUDSDK_COMPUTE_ZONE=europe-west1-b" \
-e "CLOUDSDK_COMPUTE_REGION=europe-west1" \
blacklabelops/gcloud \
gcloud compute instances list
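The GCLOUD_ACCOUNT value above is a base64-encoded service-account key. Depending on your base64 implementation, the output may be wrapped across lines, so stripping newlines is safer; here is a sketch (encode_key and the auth.json path are illustrative):

```shell
# encode_key: base64-encode a service-account JSON key into a single
# line suitable for an environment variable.
encode_key() {
  base64 < "$1" | tr -d '\n'
}

# e.g. docker run -e "GCLOUD_ACCOUNT=$(encode_key auth.json)" ...
```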