Docker cannot connect to X server - docker

I have created a Docker image for OpenCV and facial recognition to simplify the setup process.
But the recognize.py script needs an X server to display the resulting image. Here is what I have done so far:
sudo docker run -t -d --name opencv opencv:latest
sudo docker exec -it opencv bash /extract-embeddings.sh
sudo docker exec -it opencv bash /train-model.sh
All is fine so far. The last step is the actual comparison that displays the result in an image.
sudo docker exec -it opencv bash /face-recognition.sh
It gives the output:
[INFO] loading face detector...
[INFO] loading face recognizer...
No protocol specified
: cannot connect to X server :0
I have tried running the container with the following command:
sudo docker run -t -d --name opencv -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix opencv:latest
But it doesn't help.

Try running this:
xhost +
sudo docker run --rm -ti --net=host --ipc=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --env="QT_X11_NO_MITSHM=1" <image_name> <arguments>
Others might face an issue where the image is not rendered on screen, or a blank window appears with no image; in that case, make sure --env="QT_X11_NO_MITSHM=1" is included as in the command above when running the Docker image. It should solve the problem.
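Applied to the workflow in the question, that amounts to something like the following (a sketch; the existing opencv container has to be removed and re-created so that the new options take effect):
xhost +
sudo docker rm -f opencv
sudo docker run -t -d --name opencv --net=host --ipc=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --env="QT_X11_NO_MITSHM=1" opencv:latest
sudo docker exec -it opencv bash /face-recognition.sh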
For further information, I would recommend checking out the references below.
Reference 1
Reference 2

It looks like xauth is the issue preventing the image from being displayed.
The details are at Can you run GUI applications in a Docker container?
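A common way to hand the container a valid X cookie, adapted from the approach in that thread (a sketch; /tmp/.docker.xauth is just an arbitrary path):
XAUTH=/tmp/.docker.xauth
touch $XAUTH
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
sudo docker run -t -d --name opencv -e DISPLAY=$DISPLAY -e XAUTHORITY=$XAUTH -v $XAUTH:$XAUTH -v /tmp/.X11-unix:/tmp/.X11-unix opencv:latest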

It may also be that the XAuthority file is needed.
First, make sure that the host's $XAUTHORITY is defined.
And second, add the following parameters to the docker run command:
-v $XAUTHORITY:/tmp/.XAuthority -e XAUTHORITY=/tmp/.XAuthority
An example of a complete command:
sudo docker run --rm -ti --net=host --ipc=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $XAUTHORITY:/tmp/.XAuthority -e XAUTHORITY=/tmp/.XAuthority --env="QT_X11_NO_MITSHM=1" <image_name> <arguments>
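To check the first point, something like this should print a non-empty path to a file that exists (a quick sketch; on many systems the variable is only set inside a graphical session):
echo $XAUTHORITY
ls -l "$XAUTHORITY"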

Related

What does it mean when Docker is simultaneously run in interactive and detached modes?

I'm new to Docker and came across this confusing (to me) command in one of the Docker online manuals (https://docs.docker.com/storage/bind-mounts/):
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
What I found confusing was the use of both the -it flag and the -d flag. I thought -d means to run the container in the background, but -it means to allow the user to interact with the container via the current shell. What does it mean that both flags are present? What am I not understanding here?
The -i flag keeps the container's stdin open and the -t flag allocates a pseudo-TTY, and they do so even in the presence of the -d flag. Furthermore, you can always attach to the container in the future using the docker attach command.
Consider: If I try to start an interactive shell without passing -i...
$ docker run -d --name demo alpine sh
...the container will exit immediately, because stdin is closed. If I want that shell to keep running in a detached container, I need:
$ docker run -itd --name demo alpine sh
This allows me to attach to the container in the future and interact with the shell:
$ docker attach demo
/ #
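One caveat: typing exit in that attached shell stops the container, because the shell is the container's main process. To leave it running, detach with the default escape sequence Ctrl-P followed by Ctrl-Q instead:
$ docker attach demo
/ #                              # press Ctrl-P, then Ctrl-Q, to detach without stopping the shell
$ docker ps --filter name=demo   # demo should still show up as running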

Why would we want to use the --detach switch together with --interactive and --tty in Docker?

I'm reading the docker documentations, and I've seen this command:
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
As far as I know, using the -d or --detach switch runs the command outside of the current terminal emulator and returns control of the terminal to the user. Using --tty (-t) and --interactive (-i) is the complete opposite. Why would anyone want to use them together in one command?
For that specific command, it doesn't make sense, since nginx does not have an interactive component. But in general, it allows you to later attach to the container with docker attach. E.g.
$ docker run --name test-no-input -d busybox /bin/sh
92c0447e0c19de090847b7a36657d3713e3795b72e413576e25ab2ce4074d64b
$ docker attach test-no-input
You cannot attach to a stopped container, start it first
$ docker run --name test-input -dit busybox /bin/sh
57e4adcc14878261f64d10eb7839b35d5fa65c841bbcb3cd81b6bf5b8fe9d184
$ docker attach test-input
/ # echo hello from the container
hello from the container
/ # exit
The first container stopped since it was running a shell, and there was no input on stdin (no -i). A shell exits when it finishes reading input (e.g. the end of a shell script).
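You can see the difference in the status column, for example (a quick sketch; the name filter matches both containers as a substring):
$ docker ps -a --filter name=test- --format 'table {{.Names}}\t{{.Status}}'
test-no-input should show an Exited (0) status, while test-input should show Up.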

Docker can't find file location in Windows 10

I am trying to run a software for predicting hemorrhage volume on brain CT in docker: https://github.com/msharrock/deepbleed
I created a "deepbleed" folder on my D:\ drive in Windows, and ran the docker pull msharrock/deepbleed command after I cd'd into that directory. The pull was successful and I can see the image in my Docker Desktop app.
Then I created the indir and outdir folders as instructed in the documentation, and placed my CT file for prediction in the indir folder.
The readme tells me to run this command next:
docker run -it msharrock/deepbleed bash -v /path/to/data:/data/
So I have run the following commands, but I get "no such file or directory" for all of them:
docker run --rm -it msharrock/deepbleed bash -v pwd/deepbleed/indir:outdir
docker run --rm -it msharrock/deepbleed bash -v ~/deepbleed/indir:/outdir/
docker run --rm -it msharrock/deepbleed bash -v /mnt/d/deepbleed/indir:/outdir/
docker run --rm -it msharrock/deepbleed bash -v /d/deepbleed/indir:/outdir
docker run --rm -it msharrock/deepbleed bash -v "$(& "D:\deepbleed\indir" "$(pwd)")":/outdir
docker run --rm -it msharrock/deepbleed bash -v /indir/:/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d:/deepbleed/indir://d:/deepbleed/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d/deepbleed/indir://d/deepbleed/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d/deepbleed/indir:/outdir/
My Docker is running on the WSL2-based engine in Windows 10; the Hyper-V folders for disks and virtual machines are located on my D: drive.
What do I need to do to get this running?
Try doing it like this (just using one of the items from your list for this example, to give you the idea):
docker run --rm -it -v /mnt/d/deepbleed/indir:/outdir msharrock/deepbleed bash
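The important part is that -v, like every other docker option, has to come before the image name; anything after msharrock/deepbleed is passed to bash inside the container instead of to docker. With Docker Desktop on the WSL2 backend, a Windows drive-letter path from PowerShell usually works as well, for example (a sketch mounting the whole deepbleed folder at /data, to match the /data/ target in the README command):
docker run --rm -it -v D:\deepbleed:/data msharrock/deepbleed bash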

How to run commands in Docker container

Hello, I'm trying to follow the step-by-step guide to build JPEG XL (I'm on Windows and trying to build an x64 version for Linux).
after:
docker run -u root:root -it --rm -v C:\Users\fred\source\tools\jpegxl\jpeg-xl-master -w /jpeg-xl gcr.io/jpegxl/jpegxl-builder
I have the container running, but I don't know how to run this command inside it:
CC=clang-6.0 CXX=clang++-6.0 ./ci.sh opt
I tried CC=clang-6.0 CXX=clang++-6.0 ./ci.sh opt and I get ./ci.sh: No such file or directory. No command seems to work; when I do "ls" it displays nothing.
Does someone know how to get this to build?
Make sure that you start a bash terminal inside the container:
docker run -it <image> /bin/bash
I believe /bin/bash is missing from your docker run command. As a result, you are executing the command for clang inside your own environment, not the container.
You can set the environment variables by using -e
Example
-e CC=clang-6.0 -e CXX=clang++-6.0
The full command to log in to your container:
docker run -u root:root -it --rm -e CC=clang-6.0 -e CXX=clang++-6.0 -v C:\Users\fred\source\tools\jpegxl\jpeg-xl-master -w /jpeg-xl gcr.io/jpegxl/jpegxl-builder /bin/bash
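One more thing to check: the -v in that command only gives a host path with no container target, so the source tree may not actually end up at /jpeg-xl (which would also explain why ls shows nothing). A sketch with an explicit host:container pair, assuming the tree is meant to appear at /jpeg-xl:
docker run -u root:root -it --rm -e CC=clang-6.0 -e CXX=clang++-6.0 -v C:\Users\fred\source\tools\jpegxl\jpeg-xl-master:/jpeg-xl -w /jpeg-xl gcr.io/jpegxl/jpegxl-builder /bin/bash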
They have updated the image without updating the command, so the command is now:
CC=clang-7 CXX=clang++-7 ./ci.sh opt
The discussion is here:
Can't build from docker image "Unknown clang version"

VirtualBox inside Docker

I'm trying to get VirtualBox to run inside of Docker. I'm using this: https://registry.hub.docker.com/u/jess/virtualbox/dockerfile/.
When I run the command:
sudo docker run -d \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
--privileged \
--name virtualbox \
jess/virtualbox
It adds virtualbox inside a container. When I run sudo docker start container_id, it echoes back the container_id but doesn't add it to the running containers. I check with sudo docker ps and it is not there; however, it is there with sudo docker ps -a.
What am I doing wrong? I get no errors either.
EDIT: I'm running Docker in Ubuntu 15.04 (Not inside VirtualBox)
You have to let Docker connect to your local X server. There are different ways to do this. One straightforward way is to run xhost +local:docker before running your container (i.e., before docker run).
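For example, combined with the command from the question (the stopped container has to be removed first so the virtualbox name can be reused):
xhost +local:docker
sudo docker rm virtualbox
sudo docker run -d \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
--privileged \
--name virtualbox \
jess/virtualbox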
