X11 Authentication Error when run as a Docker container

I'm trying to run an application from a Docker container which is supposed to open a GUI window (a video stream, in my case). The Docker container runs on a Raspberry Pi; I SSH into the Pi from my Mac and then issue the docker run command. I have one problem here:
When I run the whole thing as follows, it works flawlessly:
docker run -it --net=host --device=/dev/vcsm --device=/dev/vchiq -e DISPLAY -v /tmp/.X11-unix joesan/motion_detector bash
From the bash shell that opens up after issuing the docker run command, I install xauth:
root@cctv:/raspi_motion_detection/project# apt-get install xauth
I then add the Xauth cookie using xauth add.
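Roughly like this (a sketch; the display number and cookie value here are placeholders, not the actual values):
xauth add cctv/unix:0 MIT-MAGIC-COOKIE-1 <cookie-value-copied-from-the-host>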
I then run my Python program which shows the GUI window with the video stream
So far so good. But I don't want to repeat these steps every time, so I wrote a small script to do this, as below:
HOST=cctv
DISPLAY_NUMBER=$(echo $DISPLAY | cut -d. -f1 | cut -d: -f2)
echo $DISPLAY_NUMBER
# Extract auth cookie
AUTH_COOKIE=$(xauth list | grep "^$(hostname)/unix:${DISPLAY_NUMBER} " | awk '{print $3}')
# Add the xauth cookie to xauth
xauth add ${HOSTNAME}/unix:${DISPLAY_NUMBER} MIT-MAGIC-COOKIE-1 ${AUTH_COOKIE}
# Launch the container
docker run -it --net=host --device=/dev/vcsm --device=/dev/vchiq -e DISPLAY -v /tmp/.X11-unix joesan/motion_detector
But this time it fails with the error:
X11 connection rejected because of wrong authentication.
Unable to init server: Could not connect: Connection refused
I then tried to run the above script as a sudo user and got the following:
xauth: file /root/.Xauthority does not exist
xauth: (argv):1: bad "add" command line
X11 connection rejected because of wrong authentication.
Unable to init server: Could not connect: Connection refused
Is there anything that I'm missing? Please help!
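For reference, a variant that writes the cookie into a file the container can read, borrowing the xauth nlist/nmerge approach from the 'How to connect to Remote X server with docker?' question below (a sketch under those assumptions, not a verified fix):
XAUTH=/tmp/.docker.xauth
touch $XAUTH
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
docker run -it --net=host --device=/dev/vcsm --device=/dev/vchiq \
    -e DISPLAY -e XAUTHORITY=$XAUTH \
    -v $XAUTH:$XAUTH -v /tmp/.X11-unix:/tmp/.X11-unix \
    joesan/motion_detector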

Related

Cannot Connect to DISPLAY from docker container

I am following a basic tutorial in which we run abiword (some gui) from a docker container. For some reason, the container cannot find my display.
I am running on Ubuntu 20.04, on a x86_64 machine.
My Dockerfile:
FROM ubuntu
RUN apt update && apt install -y abiword
CMD abiword
My build command:
docker build -t abiword .
Before my run command, I add docker to my authorized xhosts:
mylinux@mylinux:$ xhost +local:docker
non-network local connections being added to access control list
mylinux@mylinux:$ xhost
access control enabled, only authorized clients can connect
LOCAL:
SI:localuser:lu20
I also tried xhost + to disable access control altogether, but no luck.
Run commands I've tried:
docker run -e DISPLAY=$(hostname -I | awk '{print $1}')$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix abiword
docker run -e DISPLAY=unix$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix abiword
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix abiword
The volume I'm mounting does exist:
mylinux@mylinux:$ ls /tmp/.X11-unix/
X0 X1
In all cases, I get the same output:
** (abiword:7): WARNING **: 13:46:51.920: clutter failed 0, get a life.
No DISPLAY: this may not be what you want.
Any help is appreciated.
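One way to narrow this down (a debugging sketch, not a fix): run a throwaway container the same way and check what it actually sees:
docker run --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix ubuntu bash -c 'echo "DISPLAY=$DISPLAY"; ls /tmp/.X11-unix'
If DISPLAY prints empty or the socket directory is missing, the problem is in how the variable or the volume is passed, not in abiword itself.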

How to connect to Remote X server with docker?

I'm trying to run GUI tests from Docker on a remote server, and I need a real physical display for them. The issue is that the server doesn't have a physical monitor, so I'm trying to use X forwarding to connect to my PC, which does have a monitor.
So what I've done so far:
On the machine with Monitor.
1) $ touch /tmp/.docker.xauth
2) $ xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f /tmp/.docker.xauth nmerge -
3) $ rsync -aztP /tmp/.docker.xauth user@server_in_clouds:/tmp/.docker.xauth
4) $ echo $DISPLAY (returns :1)
On the machine in Cloud.
1) docker run --rm -v /tmp/.X11-unix:/tmp/.X11-unix -v /tmp/.docker.xauth:/tmp/.docker.xauth -e XAUTHORITY=/tmp/.docker.xauth --gpus=all -e DISPLAY=IP_OF_THE_MACHINE_WITH_MONITOR:1 -e NVIDIA_DRIVER_CAPABILITIES=all DOCKER_IMAGE xclock
After some time it says that it can't connect to IP_OF_THE_MACHINE_WITH_MONITOR:1.
Should I do anything else? I believe I'm missing something. Or maybe a physical monitor isn't even required on the CentOS side to do this?
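One thing worth checking (an assumption about the setup, not something stated in the question): DISPLAY=IP:1 sends the X connection over TCP to port 6000 plus the display number, i.e. 6001, and most modern X servers are started with -nolisten tcp, so that port is closed by default. On the machine with the monitor:
# check whether the X server accepts TCP connections for display :1
ss -ltn | grep 6001
If nothing is listening there, TCP access to the X server has to be enabled first (or the X traffic tunneled over SSH instead).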

How to kill/stop remote Docker container after disconnecting SSH

I have a remote docker container that I access over SSH. I start the container normally with the docker start command.
sudo docker start 1db52045d674
sudo docker exec -it 1db52045d674 bash
This starts an interactive tty in the container, which I access over ssh.
I'd like the container to kill itself if I close the SSH connection. Is there any way to do this?
.bash_logout is executed every time you use the exit command to end a terminal session, so you can use this file to run the docker stop command when you end the SSH session on the remote server.
Create the ~/.bash_logout file if it doesn't exist, and add the command that stops the docker container to it.
Example:
docker stop container_name
Note: if a user closes the terminal window instead of typing the exit command, this file is not executed.
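For completeness, a one-liner that appends this to the file (container_name is a placeholder, as above):
cat >> ~/.bash_logout <<'EOF'
docker stop container_name
EOF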
I was hoping for a more elegant solution, but in the end I launched a bash script over SSH that traps SIGHUP,
something like:
trap 'docker stop CONTAINER_NAME' SIGHUP
while sleep 5; do
    echo "foo"
done
so when the operator closes the SSH connection, the trap gets triggered and docker stops the container cleanly.
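One way to run it end to end (a sketch; it assumes ssh -t so a TTY is allocated and the remote shell receives SIGHUP when the connection drops):
ssh -t user@remote-host 'trap "docker stop CONTAINER_NAME" HUP; while sleep 5; do echo foo; done'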
You can use the --init flag. This way, an init process (docker-init) runs as PID 1 in your container and you can send a kill signal to it: https://docs.docker.com/engine/reference/run/#specify-an-init-process
Start the server:
docker run --init \
    -p 2222:2222 \
    -e USER_NAME=user \
    -e USER_PASSWORD=pass \
    -e PASSWORD_ACCESS=true \
    -e SUDO_ACCESS=true \
    linuxserver/openssh-server
Just note the --init and -e SUDO_ACCESS=true parameters here.
In another (client) shell,
ssh into container:
$ ssh user@127.0.0.1 -p 2222 -oStrictHostKeyChecking=accept-new
Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
user@127.0.0.1's password:
Welcome to OpenSSH Server
Then send a kill signal to PID 1 (docker-init):
$ sudo kill -s SIGINT 1
[sudo] password for user:
$ Connection to 127.0.0.1 closed by remote host.
Connection to 127.0.0.1 closed.
Container is gone.
I hope this helps.

Failed to download repo vuejs-templates/webpack: EACCES: permission denied, mkdir '/.vue-templates'

I'm trying to learn Docker. I installed Docker Toolbox on Windows 10 (I have the Home edition, so I can't use the regular Docker installation, because that Windows version doesn't have Hyper-V).
I tried this container from Docker Hub to use with the Vue.js framework:
docker pull ebiven/vue-cli
I added a new alias as described on the container page (I changed the name to vuejs because I have vue installed locally):
alias vuejs='docker run -it --rm -v "$PWD":"$PWD" -w "$PWD" -u "$(id -u)" ebiven/vue-cli vue'
And then in the console I wrote:
vuejs init webpack .
I got an error message:
vue-cli · Failed to download repo vuejs-templates/webpack: EACCES: permission denied, mkdir '/.vue-templates'
How to fix this?
After some attempts, I realized that I needed to remove this part of the command: -u "$(id -u)". Now it works properly.
So it should be:
alias vuejs='docker run -it --rm -v "$PWD":"$PWD" -w "$PWD" ebiven/vue-cli vue'
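A likely explanation, though this is my reading rather than part of the original answer: with -u "$(id -u)" there is no matching passwd entry inside the container, so HOME falls back to /, and vue tries to mkdir /.vue-templates there, which only root may do. Removing -u simply runs the container as root. If you'd rather stay non-root, pointing HOME at a writable directory should also work:
alias vuejs='docker run -it --rm -v "$PWD":"$PWD" -w "$PWD" -u "$(id -u)" -e HOME="$PWD" ebiven/vue-cli vue'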

Error "The input device is not a TTY"

I am running the following command from my Jenkinsfile. However, I get the error "The input device is not a TTY".
docker run -v $PWD:/foobar -it cloudfoundry/cflinuxfs2 /foobar/script.sh
Is there a way to run the script from the Jenkinsfile without interactive mode?
I basically have a file called script.sh that I would like to run inside the Docker container.
Remove the -it from your CLI call to make it non-interactive and remove the TTY. If you don't need either, e.g. when running your command inside a Jenkins or cron script, you should do this.
Or you can change it to -i if you have input piped into the docker command that doesn't come from a TTY. If you have something like xyz | docker ... or docker ... <input in your command line, do this.
Or you can change it to -t if you want TTY support but don't have it available on the input device. Do this for apps that check for a TTY to enable color formatting of the output in your logs, or for when you later attach to the container with a proper terminal.
Or if you need an interactive terminal and aren't running in a terminal on Linux or macOS, use a different command line interface. PowerShell is reported to include this support on Windows.
What is a TTY? It's a terminal interface that supports escape sequences, moving the cursor around, etc, that comes from the old days of dumb terminals attached to mainframes. Today it is provided by the Linux command terminals and ssh interfaces. See the wikipedia article for more details.
To see the difference of running a container with and without a TTY, run a container without one: docker run --rm -i ubuntu bash. From inside that container, install vim with apt-get update; apt-get install vim. Note the lack of a prompt. When running vim against a file, try to move the cursor around within the file.
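In compact form, that experiment looks like this (ubuntu is just a convenient stock image):
docker run --rm -i ubuntu bash   # no TTY: no shell prompt, cursor movement prints escape codes
docker run --rm -it ubuntu bash  # with TTY: prompt appears and full-screen apps like vim behave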
For docker run, DON'T USE the -it flag (as BMitch said).
And it's not exactly what you are asking, but it may also be useful for others:
For docker-compose exec, use the -T flag!
The -T flag helps people who are using docker-compose exec (it disables pseudo-TTY allocation).
For example:
docker-compose -f /srv/backend_bigdata/local.yml exec -T postgres backup
or
docker-compose exec -T mysql mysql -uuser_name -ppassword database_name < dir/to/db_backup.sql
For those who struggle with this error and git bash on Windows: just use PowerShell, where -it works perfectly.
If you are using git bash on Windows, you just need to put winpty before your docker line:
winpty docker exec -it some_container bash
In order for docker to allocate a TTY (the -t option), you already need to be in a TTY when docker run is called. Jenkins does not execute its jobs in a TTY.
Having said that, you may also want to run the same script locally, and in that case it can be really convenient to have a TTY allocated so you can send signals like Ctrl+C.
To fix this, make your script use the -t option only when a TTY is available, like so:
test -t 1 && USE_TTY="-t"   # -t 1 is true only when stdout is a terminal
docker run ${USE_TTY} ...
When using git bash:
1) I execute the command:
docker exec -it 726fe4999627 /bin/bash
and I get the error:
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
2) Then I execute the command:
winpty docker exec -it 726fe4999627 /bin/bash
and I get another error:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: \"D:/Git/usr/bin/bash.exe\": stat D:/Git/usr/bin/bash.exe: no such file or directory": unknown
3) Third, I execute:
winpty docker exec -it 726fe4999627 bash
and it works.
When I use PowerShell, everything works well.
Using docker-compose exec -T fixed the problem for me in Jenkins:
docker-compose exec -T containerName php script.php
Same case here: I was running the command from a .sh (bash) script and from a Python .py script, and got the same "The input device is not a TTY" error. In my case I was taking a dump from a running container in my production environment, with authentication and some arguments passed in, and writing out the .bak file of my MSSQL database container.
Remove -it from the command. If you want to keep it interactive, then keep -i.
If using Windows, try cmd; for me it works. Also check that Docker is started.
My Jenkins pipeline step shown below failed with the same error.
steps {
    echo 'Building ...'
    sh 'sh ./Tools/build.sh'
}
In my "build.sh" script file "docker run" command output this error when it was executed by Jenkins job. However it was working OK when the script ran in the shell terminal.The error happened because of -t option passed to docker run command that as I know tries to allocate terminal and fails if there is no terminal to allocate.
In my case I have changed the script to pass -t option only if a terminal could be detected. Here is the code after changes :
DOCKER_RUN_OPTIONS="-i --rm"
# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
docker run $DOCKER_RUN_OPTIONS --name my-container-name my-image-tag
I know this is not directly answering the question at hand, but for anyone who comes upon this question while using WSL, Docker for Windows, and cmder or ConEmu:
The trick is not to use the Docker that is installed on Windows at /mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe, but rather to install the Ubuntu/Linux Docker and use that. It's worth pointing out that you can't run Docker itself from within WSL, but you can connect to Docker for Windows from the Linux Docker client.
Install Docker on Linux
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
Connect to Docker for Windows on port 2375, which needs to be enabled in the settings of Docker for Windows:
docker -H localhost:2375 run -it -v /mnt/c/code:/var/app -w "/var/app" centos:7
Or set the DOCKER_HOST variable, which will allow you to omit the -H switch:
export DOCKER_HOST=tcp://localhost:2375
You should now be able to connect interactively with a tty terminal session.
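That is, the earlier command then works without the -H switch:
docker run -it -v /mnt/c/code:/var/app -w "/var/app" centos:7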
In Jenkins, I'm using docker-compose exec -T, e.g.:
docker-compose exec -T app php artisan migrate
winpty works as long as you don't specify volumes to be mounted, such as .:/mountpoint or ${pwd}:/mountpoint.
The best workaround I have found is to use git bash inside Visual Studio Code and use its terminal to start and stop containers or docker-compose.
For those using Pyinvoke, see this documentation, which I'll syndicate here in case the link dies:
99% of the time, adding pty=True to your run call will make things work as you were expecting. Read on for why this is (and why pty=True is not the default).
Command-line programs often change behavior depending on whether a controlling terminal is present; a common example is the use or disuse of colored output. When the recipient of your output is a human at a terminal, you may want to use color, tailor line length to match terminal width, etc.
Conversely, when your output is being sent to another program (shell pipe, CI server, file, etc) color escape codes and other terminal-specific behaviors can result in unwanted garbage.
Invoke’s use cases span both of the above - sometimes you only want data displayed directly, sometimes you only want to capture it as a string; often you want both. Because of this, there is no “correct” default behavior re: use of a pseudo-terminal - some large chunk of use cases will be inconvenienced either way.
For use cases which don’t care, direct invocation without a pseudo-terminal is faster & cleaner, so it is the default.
Instead of using -it, use --tty.
So your docker run should look like this:
docker run -v $PWD:/foobar --tty cloudfoundry/cflinuxfs2 /foobar/script.sh
Use only the -i flag rather than the -it flag; it still lets you see what is going on inside the container.
docker exec -i $USER bash <<EOF
apt install nano -y
EOF
You might still see the warning, but it shows you the output from inside the container on your terminal.
