Open/Edit/Save files on the host with app running in container - docker

We recently had a lot of problems deploying the Linux version of our app for a client (updated libraries, missing libraries, install paths), and we are looking at using Docker for deployment.
Our app has a UI, so we naturally map that using
-e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix
and we can actually see the UI popping up.
But when it's time to open a file, the problems start. We want to browse only the host filesystem and save any output file on the host (the output directory is determined by the location of the opened file).
Which strategy would you suggest for this?
We don't want the client to see any difference between the app running locally and inside Docker. We are working on a launch script so the client can still double-click on it to start the app; we can add all the configuration we need for the docker run command in there.
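A minimal sketch of what we have in mind for that launch script (the image name myapp:latest is a placeholder, and the command is only echoed as a dry run; a real launcher would exec it):

```shell
#!/bin/sh
# Hypothetical launch script: builds the docker invocation so the client can
# just double-click a desktop shortcut. "myapp:latest" is a placeholder image.
IMAGE="myapp:latest"

DOCKER_CMD="docker run --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -u $(id -u):$(id -g) \
  -v $HOME:$HOME \
  -w $PWD \
  $IMAGE"

# Printed here as a dry run; swap 'echo' for 'exec' to actually launch.
echo "$DOCKER_CMD"
```

Running the container as the host user's uid/gid and mounting the home directory is what keeps file ownership on the host correct.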

After recommendations by @CFrei and @Robert, here's a solution that seems to work well:
docker run \
-ti \
-u $(id -u):$(id -g) \
-v /home/$(id -un):/home/$(id -un) \
-v /etc/passwd:/etc/passwd:ro \
-w $(pwd) \
MyDockerImage
And now every file created inside the container lands in the right directory on the host, owned by the right user.
And from inside the container it really looks like the host, which will be very useful for the client.
Thanks again for your help guys!
And I hope this can help someone else.

As you may know, the container has its own filesystem, provided by the image it runs on top of.
You can map a host directory or file to a path inside the container, where your program expects it to be. This is known as a Docker volume (more precisely, a bind mount). You're already doing that for the X11 socket communication (the -v flag).
For instance, for a file:
docker run -v /absolute/path/in/the/host/some.file:/path/inside/container/some.file
For a directory:
docker run -v /absolute/path/in/the/host/some/dir:/path/inside/container/some/dir
You can provide as many -v flags as you might need.
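For example, several mounts combined in one invocation (all paths and the image name are illustrative, and the command is only echoed as a dry run):

```shell
#!/bin/sh
# Sketch: multiple -v flags in a single docker run (paths are illustrative).
# The :ro suffix on the last mount makes it read-only inside the container.
MULTI_MOUNT_CMD='docker run \
  -v /home/alice/projects:/work \
  -v /home/alice/app.conf:/etc/app/app.conf \
  -v /data/shared:/shared:ro \
  some-image'

echo "$MULTI_MOUNT_CMD"
```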
Here you can find more useful information.


How to show GUI apps from Docker Desktop container on Windows 11

This article states that Windows 11 natively supports running X11 and Wayland applications on WSL.
I tried to do the same through a Docker container, setting the environment variable DISPLAY="host.docker.internal:0.0" and running a GUI application (like gedit). But instead I got this error:
Unable to init server: Could not connect: Connection refused
Gtk-WARNING **: 17:05:50.416: cannot open display: host.docker.internal:0.0
I stumbled upon your question while attempting the same thing and actually got it to work with the aid of this blog post from Microsoft. I use a minimal Dockerfile based on Ubuntu that installs gedit:
FROM ubuntu:22.04
RUN apt update -y && apt install -y gedit
CMD ["gedit"]
Create the image the usual way, e.g. docker build . -t guitest:1.0
On the WSL command line, start it like this:
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix \
-v /mnt/wslg:/mnt/wslg \
-e DISPLAY \
-e WAYLAND_DISPLAY \
-e XDG_RUNTIME_DIR \
-e PULSE_SERVER \
guitest:1.0
I hope this is of good use to you as well.
This answer is heavily based on what chrillof has said. Thanks for the excellent start!
The critical things here for Docker Desktop users on Windows with WSL2 are that:
The container host (i.e. the docker-desktop-data WSL2 distribution) does not have a /tmp/.X11-unix itself. This folder is actually found at /mnt/host/wslg/.X11-unix on the docker-desktop distribution, which translates to /run/desktop/mnt/host/wslg/.X11-unix when running containers.
There are no baked-in environment variables to assist you, so you need to specify the environment variables explicitly with these folders in mind.
I found this GitHub issue where someone had to set environment variables manually, which allowed me to connect the dots between what others experience directly on WSL2 and chrillof's solution.
Therefore, modifying chrillof's solution using PowerShell from the host, it looks more like:
docker run -it -v /run/desktop/mnt/host/wslg/.X11-unix:/tmp/.X11-unix `
-v /run/desktop/mnt/host/wslg:/mnt/wslg `
-e DISPLAY=:0 `
-e WAYLAND_DISPLAY=wayland-0 `
-e XDG_RUNTIME_DIR=/mnt/wslg/runtime-dir `
-e PULSE_SERVER=/mnt/wslg/PulseServer `
guitest:1.0
On my computer it looks like this (demo of WSLg X11).
To be clear, I have not checked if audio is functional or not, but this does allow you to avoid the installation of another X11 server if you already have WSL2 installed.

Docker Issue: Removing a Bind-Mounted Volume

I have been unable to find any help online for this simple mistake I made, so I was looking for some help. I am using a server to run a Docker image in a container, and I mistyped and caused an annoyance for myself. I ran the command
docker run --rm -v typo:location docker_name
and since I had a typo in the directory to mount, it created a directory on the host machine, and when the container ended the directory remained. I tried to remove it, but I just get the error
rm -rf typo/
rm: cannot remove 'typo': Permission denied
I know now that I should have used --mount instead of -v for safety, but the damage is done; how can I remove this directory without having access to the container that created it?
I apologize in advance, my knowledge of docker is quite limited. I have mostly learned it only to use a particular image and I do a bunch of Google searches to do the rest.
The first rule of Docker security is, if you can run any docker command at all, you can get unrestricted root access over the entire host.
So you can fix this issue by running a container as root that bind-mounts the parent directory and deletes the directory in question:
docker run \
--rm \
-v "$PWD:/pwd" \
busybox \
rm -rf /pwd/typo
I do not have sudo permissions
You can fix that
docker run --rm -v /:/host busybox vi /host/etc/sudoers
(This has implications in a lot of places. Don't indiscriminately add users to a docker group, particularly on a multi-user system, since they can trivially get root access. Be careful publishing the host's Docker socket into a container, since the container can then root the host; perhaps redesign your application to avoid needing it. Definitely do not expose the Docker socket over the network.)

Copying a file from container to locally by using volume mounts

Trying to copy files from the container to the local first
So I have a custom Dockerfile with RUN mkdir /test1 && touch /test1/1.txt. I build my image, and I have created an empty folder at the local path /root/test1,
and then docker run -d --name container1 -v /root/test1:/test1 Image:1.
I tried to copy files from the container to the local folder so I could use them later on, but the empty local folder takes precedence over the container path, leaving the container's directory empty.
Could you please someone help me here?
For example, I have built my own custom Jenkins image; the first time I launch it I need to copy all the configuration and changes from the container to the local machine, so that if I later delete the container and launch it again I don't need to configure it from scratch.
Thanks,
The relatively new --mount flag replaces the -v/--volume flag. It's easier to understand (syntactically) and is also more verbose (see https://docs.docker.com/storage/volumes/).
You can mount and copy with:
docker run -i \
--rm \
--mount type=bind,source="$(pwd)"/root/test1,target=/test1 \
Image:1 \
/bin/bash << COMMANDS
cp <files> /test1
COMMANDS
where you need to adjust the cp command to your needs; Image:1 is the image name from the question. I'm not sure if you need the "$(pwd)" part.
Off the top of my head, without testing to confirm, I think it is
docker cp container1:/path/on/container/filename /path/on/hostmachine/
EDIT: Yes, that should work. "container1" is used here because that was the container name provided in the example.
In general it works like this
container to host
docker cp containername:/containerpath/ /hostpath/
host to container
docker cp /hostpath/ containername:/containerpath/
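Applied to the names from the question (container1, /test1, Image:1), the whole round trip for the Jenkins-style workflow could be sketched like this (shown as a dry run; the commands are only printed, not executed):

```shell
#!/bin/sh
# Sketch: copy the config out of the first container, then relaunch with a
# bind mount so the configuration survives deleting the container.
# "container1", "/test1" and "Image:1" come from the question above.
STEPS='docker cp container1:/test1/. /root/test1/
docker rm -f container1
docker run -d --name container1 -v /root/test1:/test1 Image:1'

printf '%s\n' "$STEPS"
```

After the copy, the host directory /root/test1 holds the configuration, so the bind mount in the relaunch no longer hides anything.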

Unable to docker run

I am trying to set up an image of the osrm-backend on my Docker. I am unable to run docker using the commands below (as mentioned in the wiki):
docker run -t -v ${pwd}/data osrm/osrm-backend:v5.18.0 osrm-extract -p /opt/car.lua /data/denmark-latest.osm.pbf
docker run -t -v ${pwd}:/data osrm/osrm-backend:v5.18.0 osrm-contract /data/denmark-latest.osrm
docker run -t -i -p 5000:5000 -v ${pwd}/data osrm/osrm-backend:v5.18.0 osrm-routed /data/denmark-latest.osrm
I have already fetched the corresponding map using both wget and Invoke-WebRequest. Every time I run the first command from the above, it gives the error...
[error] Input file /data/denmark-latest.osm.pbf not found!
I have tried placing the downloaded maps in the corresponding location as well. Can anyone tell me what I am doing wrong here ?
I am using PowerShell on Windows 10
For me the problem was that Docker was not able to access the C drive, even though sharing was turned on in Docker settings. After wasting lots of time, I turned off sharing of the C drive and then turned it back on. After that, when I mounted a folder into Docker, it was able to see the files.
(screenshot: Docker share drive setting)
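To check that sharing actually works after toggling it, a quick smoke test along these lines can help (busybox is a tiny stock image; the path is illustrative, and the command is only echoed as a dry run):

```shell
#!/bin/sh
# Sketch: after re-enabling drive sharing, verify a bind mount is visible by
# listing it from a throwaway busybox container. The path is illustrative.
CHECK_CMD='docker run --rm -v "$PWD:/data" busybox ls /data'

echo "$CHECK_CMD"
```

If the listing comes back empty for a non-empty host folder, the drive is still not shared correctly.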

How to sync dir from a container to a dir from the host?

I'm using Vagrant, so the container is inside a VM. Below is my shell provision:
#!/bin/bash
CONFDIR='/apps/hf/hf-container-scripts'
REGISTRY="tutum.co/xxx"
VER_APP="0.1"
NAME=app
cd $CONFDIR
sudo docker login -u xxx -p xxx -e xxx@gmail.com tutum.co
sudo docker build -t $REGISTRY/$NAME:$VER_APP .
sudo docker run -it --rm -v /apps/hf:/hf $REGISTRY/$NAME:$VER_APP
Everything runs fine and the image is built. However, the syncing command (the last one above) doesn't seem to work. I checked in the container: the /hf directory exists and has files in it.
One other problem: if I manually execute the syncing command it succeeds, but I can only see the files from the host if I ls /hf. It seems that Docker empties /hf and places the files from the host into it. I want it the other way around, or better yet, to merge them.
Yeah, that's just how volumes work, I'm afraid. Basically, a volume says: "don't use the container filesystem for this directory; instead use this directory from the host".
If you want to copy files out of the container and onto the host, you can use the docker cp command.
If you tell us what you're trying to do, perhaps we can suggest a better alternative.
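If the goal is to seed the host directory from the image once and then share it, one pattern is docker create + docker cp (shown as a dry run; the image tag is assembled from the provision script above):

```shell
#!/bin/sh
# Sketch: seed /apps/hf on the host from the image's /hf once, then bind-mount.
# "docker create" makes a stopped container whose files we can copy out;
# nothing runs inside it. The image tag comes from the provision script.
SEED_STEPS='CID=$(docker create tutum.co/xxx/app:0.1)
docker cp "$CID":/hf/. /apps/hf/
docker rm "$CID"
docker run -it --rm -v /apps/hf:/hf tutum.co/xxx/app:0.1'

printf '%s\n' "$SEED_STEPS"
```

After the one-time seed, the bind mount shows the same files on both sides, which is effectively the "merge" being asked for.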
