Docker process from container starts on host and vice versa

I have a host running Ubuntu 20.04, and I run Firefox in a container built from the ubuntu:20.04 image.
When Firefox is already running on the host: the container stops immediately, a new Firefox window appears, and I can see all my host browsing history, sessions, and so on.
When Firefox is NOT running on the host: the container keeps running, a new window titled "firefox [container hash]" appears, and I can see only the container's browsing history and sessions there (as expected). BUT when I then start Firefox on the host while the container is still running: another window titled "firefox [same container hash]" appears, and again I see only the container's browsing history and sessions.
If I run Firefox as a different user, like
sudo -H -u some-user firefox
with umask 077 set, I get perfect isolation and parallel running without Docker, but that's not the whole goal.
My dockerfile:
FROM ubuntu:20.04
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y firefox
CMD firefox
Terminal history:
xhost +local:docker
docker build -t firefox .
docker create -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name ff firefox
docker start ff
I suppose this behavior of processes launched from a container is not really obvious or expected. Could you please explain what exactly is happening and why?

A Docker container is not an isolated machine. The commands that run inside a Docker container are executed on the host machine (or the Docker VM if you are using Docker for Mac).
This can be verified in the following way:
Run a command inside the container: docker exec -it <container-name> sleep 100
On the host machine, grep for this command: ps -ef | grep sleep. On Mac, docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh will give you a shell inside the running Docker VM.
On my machine:
# ps -ef | grep sleep
2609 root 0:00 sleep 100
2616 root 0:00 grep sleep
When you run a daemon, it typically creates a socket file in a temporary directory.
This file is the gateway for communication with the application.
For instance, when MySQL is running on the system, it creates a socket file /var/run/mysqld/mysqld.sock which the mysql client uses for communication.
These daemons can also bind to a port, and be accessed through the network this way. These ports are simply socket connections to your application which are visible over the network.
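For example, on a typical Linux host you can look at these sockets directly (the paths below are the usual defaults and may differ on your system):
ls -l /var/run/mysqld/mysqld.sock   # MySQL's Unix-domain socket, if MySQL is installed
ls -l /tmp/.X11-unix/               # the X server's sockets, e.g. X0 for display :0
ss -xl                              # list all listening Unix-domain sockets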
Coming back to your question,
docker create -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name ff firefox
/tmp/.X11-unix is where the X server keeps its Unix-domain sockets. Since this directory is bind-mounted into the container, the container and the host share the same socket space.
When Firefox is already running on the host, that instance is already attached to the display socket, so the Firefox launched in the container defers to it and the container exits immediately.
When Firefox is not running on the host and the container is started, the display is free, so the container's Firefox becomes the running instance. It uses the filesystem inside the container to store history etc., which is why you do not see the history from the host.
If you now run Firefox from the host, it simply connects to the same Unix socket, finds the container's instance already attached to the display, and asks it to open a new window instead of starting its own, which is why that window still shows the container's history.
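To see this for yourself, a rough check (commands are a sketch; the ff container must still be running for docker exec to work) is to compare the display on both sides and look at who is attached to the X socket:
echo $DISPLAY                                               # on the host, e.g. :0 -> socket /tmp/.X11-unix/X0
docker exec ff sh -c 'echo $DISPLAY; ls -l /tmp/.X11-unix'  # same display, same shared socket directory
ss -xp | grep -i x11                                        # shows which processes are holding the X socket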

Related

Can't save file on remote Jupyter server running in docker container

I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere in SO, but no luck.
Is this something about connecting over SSH? The Jupyter server thinks it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more probably related to a permissions issue with your volume mount.
Please try reviewing your Docker container logs, looking for permission-related errors. You can do that with the following:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
Finally, the Jupyter docker stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided under "Additional tips and troubleshooting commands for permission-related errors" and, as suggested there, try launching the container as your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in the documentation mentioned above, check whether the volume is properly mounted using the following command:
docker inspect <container_id>
In the output, note the value of the RW field, which indicates whether the volume is writable (true) or not (false).
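Since the full docker inspect output is long, you can narrow it down to just the mount entries (each of which contains the RW field) with a format filter, for example:
docker inspect -f '{{ json .Mounts }}' <container_id>
A mount reported with "RW": false is mounted read-only, which would explain the disabled Save option.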

Why can I not run an X11 application?

So, as the title states, I'm a docker newbie.
I downloaded and installed the archlinux/base container, which seems to work great so far. I've set up a few things and installed some packages (including xeyes), and I now would like to launch xeyes. For that I found out the CONTAINER ID by running docker ps and then used that ID in my exec command, which now looks like:
$ docker exec -it -e DISPLAY=$DISPLAY 4cae1ff56eb1 xeyes
Error: Can't open display: :0
Why does it still not work, though? Also, how can I stop my running instance without losing its configured state? Previously, when I exited the container, all my configuration and software installations were gone when I restarted it. That was not desired. How do I handle this correctly?
Concerning the X display: you need to share the X server socket (note: Docker can't bind-mount a volume during an exec) and set $DISPLAY (example Dockerfile):
FROM archlinux/base
RUN pacman -Syyu --noconfirm xorg-xeyes
ENTRYPOINT ["xeyes"]
Build the docker image: docker build --rm --network host -t so:57733715 .
Run the docker container: docker run --rm -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY so:57733715
Note: in case of "No protocol specified" errors you can disable host access control with xhost +, but there is a caveat to that (see man xhost for additional information).
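Since the socket has to be mounted when the container is created (not at exec time, as noted above), a sketch of the docker exec route looks like this; the container name is just an example and the image is the one built above:
docker run -d --name xeyes-box \
    -e DISPLAY=unix$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --entrypoint sleep \
    so:57733715 infinity
docker exec -it xeyes-box xeyes    # the socket is already mounted, so this can open the display
This also touches the state question: docker stop xeyes-box followed by a later docker start xeyes-box keeps the container's filesystem (and thus the installed packages) intact; only docker rm or the --rm flag throws it away.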

Docker container exits when using -it option

Consider the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get install -y apache2 && \
apt-get clean
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
When running the container with the command docker run -p 8080:80 <image-id>, the container starts and remains running, allowing the default Apache web page to be accessed at http://localhost:8080 from the host as expected. With this run command, however, I am not able to quit the container using Ctrl+C, also as expected, since the container was not launched with the -it option. Now, if the -it option is added to the run command, the container exits immediately after startup. Why is that? Is there an elegant way to have Apache run in the foreground while exiting on Ctrl+C?
This behaviour is caused by Apache and it is not an issue with Docker. Apache is designed to shut down gracefully when it receives the SIGWINCH signal. When running the container interactively, the SIGWINCH signal is passed from the host to the container, effectively signalling Apache to shut down gracefully. On some hosts the container may exit immediately after it is started. On other hosts the container may stay running until the terminal window is resized.
It is possible to confirm that this is the source of the issue after the container exits by reviewing the Apache log file as follows:
# Run container interactively:
docker run -it <image-id>
# Get the ID of the container after it exits:
docker ps -a
# Copy the Apache log file from the container to the host:
docker cp <container-id>:/var/log/apache2/error.log .
# Use any text editor to review the log file:
vim error.log
# The last line in the log file should contain the following:
AH00492: caught SIGWINCH, shutting down gracefully
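You can also test the explanation without any terminal involved by sending the signal yourself; this assumes Apache ends up as PID 1 in the container, since docker kill signals the container's main process:
# Start detached, so no TTY is attached at all (container name is illustrative):
docker run -d --name apache-test -p 8080:80 <image-id>
# Send the same signal a terminal resize would generate:
docker kill --signal=SIGWINCH apache-test
# The container should now shut down gracefully:
docker ps -a | grep apache-test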
Sources:
https://bz.apache.org/bugzilla/show_bug.cgi?id=50669
https://bugzilla.redhat.com/show_bug.cgi?id=1212224
https://github.com/docker-library/httpd/issues/9
All that you need to do is pass the -d option to the run command:
docker run -d -p 8080:80 my-container
As yamenk mentioned, daemonizing works because you send the container to the background, decoupling it from terminal window resizing.
Since the follow-up post mentioned that running in the foreground may have been desirable, there is a good way to simulate that experience after daemonizing:
docker logs -f container-name
This will drop the usual stdout like "GET / HTTP..." connection messages back onto the console so you can watch them flow.
Now you can resize the window and stuff and still see your troubleshooting info.
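Putting the two answers together, a typical workflow looks like this (the container name is just an example):
docker run -d --name web -p 8080:80 <image-id>
docker logs -f web    # Ctrl+C here only stops the log stream, the container keeps running
docker stop web       # gracefully stops Apache when you are actually done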
I am also experiencing this problem on wsl2 under Windows 10, Docker Engine v20.10.7
Workaround:
# start bash in httpd container:
docker run --rm -ti -p 80:80 httpd:2.4.48 /bin/bash
# inside container execute:
httpd -D FOREGROUND
Now Apache httpd keeps running until you press CTRL-C or resize(?!) the terminal window.
After closing httpd, type:
exit
to leave the container
A workaround is to pipe the output to cat:
docker run -it -p 8080:80 <image-id> | cat
NOTE: It is important to use -i and -t.
Ctrl+C will work and resizing the terminal will not shut down Apache.

Docker GUI app (xterm window) from VNC host

I've built a very basic docker container to try and proof of concept running an xterm window from inside it.
In it, I have a basic install of RHEL 7.3 and xterm
I build as normal, open X access control with xhost + and then run the docker run command like so:
docker run -ti --rm -e DISPLAY=${DISPLAY} -v /tmp/.X11-unix:/tmp/.X11-unix xtermDemo /bin/bash
This runs perfectly when my base host is Linux. The problem is that most of the developers in my organization work on a Windows/Mac host and log into a VNC session. When running the Docker image from the VNC session, xterm can't run.
Any ideas? My only hunch at the moment is that the VNC Xorg isn't being run natively and that is somehow causing the issue.

Docker upgrade link container

I would like to be able to upgrade a container without restarting all the other containers that are linked to it.
According to this
https://docs.docker.com/userguide/dockerlinks/#container-linking
If you restart the source container, the linked containers /etc/hosts
files will be automatically updated with the source container's new IP
address, allowing linked communication to continue.
Sounds great, but I don't want to just restart. I need to upgrade to a newer version. And it's not working.
Let's look at this example from the article above:
sudo docker run -d --name db training/postgres
sudo docker run -t -i --rm --link db:db training/webapp /bin/bash
cat /etc/hosts
Restart db container:
sudo docker restart db
and inside the running container, cat /etc/hosts will show the new IP address for db.
But what I want:
sudo docker stop db
sudo docker rm db
sudo docker run -d --name db training/postgres:new_version
And now, inside the running container, cat /etc/hosts will show the old IP address for db. The link is broken.
Is there any way to overcome this issue?
By the way, all my containers run on the same host, so ambassadors are not an option.
