I am following a basic tutorial in which we run abiword (a GUI word processor) from a Docker container. For some reason, the container cannot find my display.
I am running Ubuntu 20.04 on an x86_64 machine.
My Dockerfile:
FROM ubuntu
RUN apt update && apt install -y abiword
CMD abiword
My build command:
docker build -t abiword .
Before my run command, I add docker to my authorized xhosts:
mylinux@mylinux:$ xhost +local:docker
non-network local connections being added to access control list
mylinux@mylinux:$ xhost
access control enabled, only authorized clients can connect
LOCAL:
SI:localuser:lu20
I also tried xhost + to disable access control altogether, but no luck.
Run commands I've tried:
docker run -e DISPLAY=$(hostname -I | awk '{print $1}')$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix abiword
docker run -e DISPLAY=unix$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix abiword
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix abiword
The volume I'm mounting does exist:
mylinux@mylinux:$ ls /tmp/.X11-unix/
X0 X1
In all cases, I get the same output:
** (abiword:7): WARNING **: 13:46:51.920: clutter failed 0, get a life.
No DISPLAY: this may not be what you want.
Any help is appreciated.
I have two users on my Ubuntu machine: personal and work. I created a Docker image to run Firefox in a container. To keep things simple, I added an alias to my .bash_aliases file so I can run it by typing "firefox" in the terminal, like so:
docker run --rm -d --name firefox \
-v $XDG_RUNTIME_DIR/pulse:$XDG_RUNTIME_DIR/pulse \
-e PULSE_SERVER=$XDG_RUNTIME_DIR/pulse/native \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
--network host \
shallowduck/firefox:1.0
The problem is that Firefox does not launch when I'm logged in as the "work" user, only as "personal".
When I run the command, I get the container ID as output in the terminal, but nothing launches.
When I run docker ps, the container isn't there.
When I run docker ps -a, there is no trace that the container exited with an error or anything else.
Both users are part of the docker group.
I'm not sure what I'm missing. Any ideas would be appreciated.
I fixed this by running this command in the terminal:
xhost +
This disables access control, allowing any client to connect to the X server. I forgot I had to run this command for each user.
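If you'd rather not run it by hand in every session, a line in each user's X startup file can do it automatically. A minimal sketch, assuming your display manager reads ~/.xprofile at login (and using the more restrictive local:docker form instead of a bare xhost +):
# add to each user's ~/.xprofile so it runs at every graphical login
xhost +local:docker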
So, as the title states, I'm a docker newbie.
I downloaded and installed the archlinux/base image, which seems to work great so far. I've set up a few things and installed some packages (including xeyes), and now I would like to launch xeyes. For that, I found the CONTAINER ID by running docker ps and then used that ID in my exec command, which now looks like:
$ docker exec -it -e DISPLAY=$DISPLAY 4cae1ff56eb1 xeyes
Error: Can't open display: :0
Why does it still not work? Also, how can I stop my running instance without losing its configured state? Previously, when I exited the container, all my configuration and software installations were gone when I restarted it. That was not what I wanted. How do I handle this correctly?
Concerning the X display: you need to share the X server socket (note: Docker can't bind-mount a volume during an exec) and set $DISPLAY (example Dockerfile):
FROM archlinux/base
RUN pacman -Syyu --noconfirm xorg-xeyes
ENTRYPOINT ["xeyes"]
Build the docker image: docker build --rm --network host -t so:57733715 .
Run the docker container: docker run --rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY so:57733715
Note: if you run into "No protocol specified" errors, you can disable host checking with xhost +, but there is a warning attached to that (see man xhost for additional information).
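As for keeping the container's configured state: as long as you don't start the container with --rm, stopping it keeps the writable layer, and you can resume it later. A minimal sketch (the name mybox is arbitrary):
docker run -it --name mybox archlinux/base bash
# ...install packages, configure things, then exit...
docker start -ai mybox
# optionally bake the current state into a new image:
docker commit mybox archlinux/base:configured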
I'm a beginner with Docker.
I have pulled a CentOS 7 image from Docker Hub and run it.
I need to SSH into the Docker container (CentOS 7) from my host.
I got the container's IP using docker inspect container-id.
I have installed the following packages:
initscripts
systemd.x86_64
systemd-libs.x86_64
open-ssh
firewalld
net-tools
When I tried to start the firewall to open the SSH port (22):
[root@a6f3e3eb095c ~]# systemctl start firewall
Failed to get D-Bus connection: Operation not permitted
I also tried:
[root@a6f3e3eb095c ~]# /usr/lib/systemd/systemd --system &
[1] 353
[root@a6f3e3eb095c ~]# systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization xen.
Detected architecture x86-64.
Welcome to CentOS Linux 7 (Core)!
Set hostname to <a6f3e3eb095c>.
Cannot determine cgroup we are running in: No such file or directory
Failed to allocate manager object: No such file or directory
[1]+ Exit 1 /usr/lib/systemd/systemd --system
How do I start the firewall/SSH inside the Docker container?
Inside the Docker container, run the following commands:
yum update -y glibc-common
yum install -y sudo passwd openssh-server openssh-clients tar screen crontabs strace telnet perl libpcap bc patch ntp dnsmasq unzip pax which
yum install -y epel-release
yum install -y hiera lsyncd sshpass rng-tools
service sshd start;
sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config;
sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config;
sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config;
sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Base.repo
mkdir -p /root/.ssh/;
rm -f /var/lib/rpm/.rpm.lock;
echo "StrictHostKeyChecking=no" > /root/.ssh/config;
echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config
echo "root:password" | chpasswd
Or:
You can simply pull a CentOS image with SSH already set up from Docker Hub:
https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=centos+ssh&starCount=0
https://hub.docker.com/r/kinogmt/centos-ssh/
https://hub.docker.com/r/jdeathe/centos-ssh/
You can avoid the "Failed to get D-Bus connection: Operation not permitted" error (i.e., the need to run systemd inside Docker) by using https://github.com/gdraheim/docker-systemctl-replacement. After that, the docker exec approach works fine for doing things inside a container.
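A minimal sketch of how that replacement is typically wired in; this assumes you have downloaded systemctl.py from the repository into your build context, so treat it as an outline rather than the project's exact recipe:
FROM centos:7
# overwrite the real systemctl with the replacement script
COPY systemctl.py /usr/bin/systemctl
RUN yum install -y openssh-server && systemctl enable sshd
# run the replacement as PID 1 so enabled services get started
CMD ["/usr/bin/systemctl"]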
If you really do need an SSH or SFTP container, you can use my Docker image as a source image for your own, or run it directly.
If you're using the official CentOS-7 image and require systemd, there are instructions on how to enable it under the section "Systemd integration".
However, based on the following:
I need to ssh in to the docker container(CentOS 7) from my host.
You can use docker exec to run commands in a running (backgrounded) container. For images that have bash available, you can access an interactive TTY and run bash as follows from your host, where <container> can be either the name or the ID:
docker exec --tty --interactive <container> bash
OR
docker exec -ti <container> bash
Finally, it's unlikely to be necessary to install the firewall package in your image: the operator decides which of the exposed ports to publish, and you can make use of Docker networking to expose only the necessary public-facing services.
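For example, instead of running a firewall inside the container, you would publish only the SSH port when starting it (a sketch; the image name is a placeholder):
# expose the container's sshd on host port 2222; nothing else is reachable from outside
docker run -d -p 2222:22 my-centos-ssh
ssh -p 2222 root@localhost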
If you are using the Docker CLI, you can get into the Docker container using the following command:
docker exec -it containerId bash
I am not sure how to SSH into the Docker container, but if you want to perform basic operations inside it, you can make use of the docker command above.
I'm on Docker 17.06.0-ce and I'm attempting to mount a CIFS share in a container, with only partial luck. If I use --privileged, it works, but that's not desirable for me. I've tried using --cap-add as suggested in this answer (even trying --cap-add ALL), with no success.
The same mount command works fine on the host system as well.
Here's a simple Dockerfile I've been playing with:
FROM alpine:latest
RUN apk add --no-cache cifs-utils
I've run it with many different permutations, all with the same result below:
Works:
docker run --rm -it --privileged cifs-test /bin/sh
Doesn't Work:
docker run --rm -it --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH cifs-test /bin/sh
Doesn't Work:
docker run --rm -it --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH --cap-add NET_ADMIN cifs-test /bin/sh
Doesn't Work:
docker run --rm -it --cap-add ALL cifs-test /bin/sh
And the command:
mkdir /test && mount.cifs //myserver/testpath /test -o user=auser,password=somepass,domain=mydomain
And the result from each run command above except the first:
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Has something changed in Docker that requires --privileged all the time for these types of mounts now? Or is there something else I'm missing?
I started using docker-volume-netshare and have had good success so far. There are some minor problems, like volumes created with docker volume create not being persistent, but nevertheless this volume driver is quite usable. One advantage is that special capabilities/privileged mode are not necessary. Here are some hints on how to use it.
Install (Ubuntu/Debian)
$ curl -L -o /tmp/docker-volume-netshare_0.34_amd64.deb https://github.com/ContainX/docker-volume-netshare/releases/download/v0.34/docker-volume-netshare_0.34_amd64.deb
$ sudo dpkg -i /tmp/docker-volume-netshare_0.34_amd64.deb
$ rm /tmp/docker-volume-netshare_0.34_amd64.deb
Configure
$ sudo vi /etc/default/docker-volume-netshare
and enter this as a single setting:
DKV_NETSHARE_OPTS="cifs --netrc=/root/"
then
$ sudo vi /root/.netrc
enter the following settings per host:
machine <host>
username <user>
password <password>
domain <domain>
Note that <host> must be a host name or an IP address followed by a colon (e.g. 10.20.30.4:)
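For example (all values are placeholders):
machine 10.20.30.4:
username alice
password secret123
domain WORKGROUP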
Enable the volume driver as a systemd service
Note: if your OS does not support systemd, another method to install it as a service is necessary.
$ sudo systemctl enable docker-volume-netshare
Use a volume in docker run and docker service create
$ sudo docker run -it --rm --mount type=volume,volume-driver=cifs,source=<myvol>,destination=<absolute-path-in-container>,volume-opt=share=<ip>:/<share> ubuntu:zesty bash
$ sudo docker service create --name <name> --mount type=volume,volume-driver=cifs,source=<myvol>,destination=<absolute-path-in-container>,volume-opt=share=<host>/<share> <image>
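As a concrete instance of the docker run template above (IP, share name, and mount point are placeholders):
$ sudo docker run -it --rm --mount type=volume,volume-driver=cifs,source=media,destination=/mnt/media,volume-opt=share=10.20.30.4:/media ubuntu:zesty bash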
It is not necessary to use the identical volume in multiple containers: each volume just maps to a CIFS share, which in turn is shared among the containers mounting it. As mentioned above, don't use docker volume create with this volume driver, as volumes are lost as soon as docker-volume-netshare is stopped or restarted (and hence on reboot).
Get help
$ docker-volume-netshare --help
$ docker-volume-netshare cifs --help
Logs
Hint: for debugging, use DKV_NETSHARE_OPTS="cifs --netrc=/root/ --verbose" in /etc/default/docker-volume-netshare, or stop the service and run docker-volume-netshare cifs --netrc=/root/ --verbose in a shell.
$ dmesg | tail
$ tail -50 /var/log/docker-volume-netshare.log
Resources
https://github.com/ContainX/docker-volume-netshare
When I try to run chromium inside a docker container I see the following error: Gtk: cannot open display: :0
Dockerfile: (based on https://registry.hub.docker.com/u/jess/chromium/dockerfile)
FROM debian:jessie
# Install Chromium
RUN sed -i.bak 's/jessie main/jessie main contrib non-free/g' /etc/apt/sources.list && \
apt-get update && apt-get install -y \
chromium \
chromium-l10n \
libcanberra-gtk-module \
libexif-dev \
libpango1.0-0 \
libv4l-0 \
pepperflashplugin-nonfree \
--no-install-recommends && \
mkdir -p /etc/chromium.d/
# Run Chromium
CMD ["/usr/bin/chromium", "--no-sandbox", "--user-data-dir=/data"]
build and run:
docker build -t chromium .
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --privileged chromium
and the error:
[1:1:0202/085603:ERROR:browser_main_loop.cc(164)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
No protocol specified
[1:1:0202/085603:ERROR:browser_main_loop.cc(210)] Gtk: cannot open display: :0
I don't know much about Chromium, but I did work with X way back when :-) When you tell an X client to connect to :0, you are telling it to connect to display 0, which over the network means TCP port 6000 + 0 = 6000. In other words, DISPLAY is host:display-number, and the TCP port is 6000 plus the display number. The X server is running on your host, so if you set:
DISPLAY=your_host_ip:0
that might work. However, X servers don't allow connections from just any old client, so you will need to open up your X server. On your host, run
xhost +
before running the docker container. All of this is assuming you can run chromium on your host (that is, an X server exists on your host).
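Putting those two steps together, a sketch (192.168.1.10 stands in for your host's IP; note this relies on the X server accepting TCP connections, which many distributions disable by default):
xhost +
docker run -e DISPLAY=192.168.1.10:0 chromium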
Try
xhost local:root
This solved it for me; I am on Debian Jessie. https://github.com/jfrazelle/dockerfiles/issues/4
Adding this as a reference (see the real answer from greg).
On your Linux host, run
xhost +"local:docker#"
In your Docker image, add
RUN apt-get update
RUN apt-get install -qqy x11-apps
and then run
sudo docker run \
    --rm \
    -it \
    --privileged \
    --env DISPLAY=unix$DISPLAY \
    -v $XAUTH:/root/.Xauthority \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /home/alex/coding:/coding \
    alexcpn/nvidia-cuda-grpc:1.0 bash
Here --rm deletes the container when bash exits, -it connects a TTY, --env DISPLAY=unix$DISPLAY exports the DISPLAY variable for the X server, -v $XAUTH:/root/.Xauthority provides authority information to the X server, and -v /tmp/.X11-unix:/tmp/.X11-unix mounts the X11 socket.
Inside the container, check with a sample command:
xclock
For Ubuntu 20.04, changing DISPLAY=:0 to DISPLAY=$DISPLAY fixed it for me; my local environment had $DISPLAY set to :1:
docker run --rm -ti --net=host -e DISPLAY=$DISPLAY fr3nd/xeyes
I also had a requirement to open a graphical application within my Docker container. These are the steps that worked for my environment (Docker version: 19.03.12, container OS: Ubuntu 18.04).
Before running the container, make the host's X server accept connections from any client by running xhost +. This is a very unrestrictive way to connect to the host's X server, and you can restrict it as described in the other answers. Then run the container with the --network=host option (e.g. docker run --network=host <my image name>). Once the container is up, log in to its shell and launch your app with DISPLAY=:0 (e.g. DISPLAY=:0 <my graphical app>).
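Condensed into commands, that sequence looks like this (the image and application names are placeholders):
# on the host: accept connections from any client (unrestrictive; see man xhost)
xhost +
# share the host's network namespace so :0 reaches the host's X server
docker run -it --network=host <my image name> bash
# inside the container:
DISPLAY=:0 <my graphical app>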
I got it to work on a Windows host but not on my Linux Mint (Ubuntu) host. The reason was that I was using Docker Desktop on Linux, which uses a VM under the hood.
Solution: Shut down Docker Desktop and install Docker Engine. Other than that, also do as in the other answers.
What is needed is an alias from your Docker hostname to the outer hostname. A DISPLAY starting with just a : means localhost. Basically, your hostname inside Docker needs to resolve via /etc/hosts to the same name as the outer host, because that is the name stored in .Xauthority.
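A sketch of that idea: give the container the host's hostname and hand it your X cookie so the .Xauthority lookup matches (this assumes the cookie lives at ~/.Xauthority):
# same hostname as the host, same cookie file, plus the X socket
docker run -it \
  --hostname "$(hostname)" \
  -e DISPLAY=$DISPLAY \
  -v ~/.Xauthority:/root/.Xauthority \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  ubuntu bash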
I found this script to automatically get the IP of your PC:
FOR /F "tokens=4 delims= " %%i in ('route print ^| find " 0.0.0.0"') do set localIp=%%i
Create a .bat file and put this in it:
FOR /F "tokens=4 delims= " %%i in ('route print ^| find " 0.0.0.0"') do set localIp=%%i
docker run -ti -v /tmp/.X11-unix -v /tmp/.docker.xauth -e XAUTHORITY=/tmp/.docker.xauth --net=host -e DISPLAY=%localIp%:0.0 your-container