Docker not mapping folders

I am connected to a Docker host. When I execute:
sudo docker run --entrypoint bash -it -v /home/jenkins/workspace/deployment:/app myregistry.com/ansible-shade:2.2.1.0
the mapping between /home/jenkins/workspace/deployment and /app does not happen.
There are a lot of files in /home/jenkins/workspace/deployment, but in /app I can't see anything.
Any idea why this happens?

I found out what the problem was.
My use case is that I work on a CentOS machine that has all the files, then I ssh into an Ubuntu VM that has none of them, and from that Ubuntu VM I connect to the Docker container. The source path of the bind mount is therefore resolved on the Ubuntu VM, not on the CentOS machine (where the files are).
The solution in that case is to have the files on the Ubuntu VM so they can be mounted.
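A quick way to catch this failure mode: the bind-mount source is resolved on the machine where the Docker daemon runs, and Docker silently gives the container an empty directory if the source is missing there. A minimal check, assuming the path from the question:

```shell
# Check the bind-mount source on the machine where the Docker daemon
# runs (the Ubuntu VM here), not on the machine you ssh'd in from.
SRC=/home/jenkins/workspace/deployment
if [ -d "$SRC" ] && [ -n "$(ls -A "$SRC" 2>/dev/null)" ]; then
  result="populated"
else
  # Docker will silently mount an empty directory instead.
  result="missing-or-empty"
fi
echo "$SRC is $result on this host"
```

If this prints missing-or-empty on the daemon's host, the container will see an empty /app.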

Related

Windows WSL2 docker.exe volume mount differs from wsl docker volume mounts

I have Docker Desktop installed on Windows with WSL 2 support. Everything works as expected. When I run my containers with a volume mount, docker run -it --rm -v W:\projects:/projects busybox, I can access all my Windows files inside this folder.
Sadly, the performance of Windows shares inside Docker isn't great, so I tried to mount a path from my WSL machine.
I was under the impression that Docker runs inside WSL, so I expected the two commands to produce the same output:
docker run -it --rm -v /home/:/myHome busybox ls -l /myHome
wsl docker run -it --rm -v /home/:/myHome busybox ls -l /myHome
but the output using docker is just total 0, whereas the output using wsl is my home directory.
Can someone explain to me where this /home directory is (physically, in WSL, or on my computer) when I run docker from Windows? And is it possible to make docker behave like wsl docker, without symlinks or path modifications, so I can mount my Linux directory inside the container?
If WSL 2 is installed, you can access its file system from Windows at the following path:
\\wsl$
/home on its own won't work, as it is not physically present in Windows's file system.
You can, however, use /home or any other Linux directory if you log in to your WSL distro. Note that the following command won't mount any volumes if you run it from Windows; it should be run only from your WSL distro:
docker run --name mycontainer -v /home:/myhome busybox
To access the /home directory of an Ubuntu-16.04 distro from Windows:
\\wsl$\Ubuntu-16.04\home
You can replace Ubuntu-16.04 with your own distro name and version.
To mount any directory that lives under WSL, make sure you have turned on the option "Enable integration with my default WSL distro":
https://docs.docker.com/docker-for-windows/wsl/
To mount a WSL directory from Windows as a volume, provide the host volume path in the following format:
docker run --name mycontainer -v \\wsl$\Ubuntu-16.04\home:/myHome busybox
Basically, docker run -v resolves the host path relative to the environment it is executed from, i.e. either Windows or WSL.
Docker's own volumes live at the following path if you have enabled WSL 2 for Docker but don't want to use your distro's file system:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
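To make the asymmetry concrete, here is a sketch. The distro name Ubuntu-16.04 is an assumption (substitute your own), and the docker call is guarded so the script is a no-op where Docker is absent:

```shell
# The same container path, two different host-path spellings depending
# on where the docker client runs.
wsl_side_path="/home"                         # valid when run inside WSL
windows_side_path='\\wsl$\Ubuntu-16.04\home'  # valid when run from Windows
if command -v docker >/dev/null 2>&1; then
  # Inside the WSL distro, the Linux path resolves directly:
  docker run --rm -v "$wsl_side_path":/myHome busybox ls /myHome || true
fi
# From a Windows shell, spell out the WSL share instead:
#   docker run --rm -v \\wsl$\Ubuntu-16.04\home:/myHome busybox ls /myHome
echo "wsl-side: $wsl_side_path, windows-side: $windows_side_path"
```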

Docker bind mount directory in /tmp not working

I'm trying to mount a directory in /tmp to a directory in a container, namely /test. To do this, I've run:
docker run --rm -it -v /tmp/tmpl42ydir5/:/test alpine:latest ls /test
I expect to see a few files when I do this, but instead I see nothing at all.
I tried moving the folder into my home directory and running again:
docker run --rm -it -v /home/theolodus/tmpl42ydir5/:/test alpine:latest ls /test
at which point I see the expected output. This makes me think I have misconfigured something and/or the permissions have bitten me. Have I missed a step in installing Docker? I installed it via sudo snap install docker and then configured Docker to let me run it as non-root by adding myself to the docker group. Running as root doesn't help.
The host machine is Ubuntu 20.04; the Docker version is 19.03.11.
When running Docker as a snap, all files that Docker uses, such as Dockerfiles, need to be in $HOME.
Ref: https://snapcraft.io/docker
The /tmp filesystem simply isn't accessible to the Docker engine when it runs within the snap's isolation. For more traditional behavior, you can install Docker on Ubuntu directly from the upstream Docker repositories.
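A minimal sketch of the workaround under snap confinement, using the directory name from the question; the docker call is guarded so the snippet is harmless without Docker:

```shell
# Move the data under $HOME, where the snap-confined engine can see it,
# then bind-mount from there instead of from /tmp.
DEST="$HOME/tmpl42ydir5"
mkdir -p "$DEST"
# cp -a /tmp/tmpl42ydir5/. "$DEST"/   # copy the original files first
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$DEST":/test alpine:latest ls /test || true
fi
echo "mount source now at: $DEST"
```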

How can I make Docker Images / Volumes (Flask, Python) accessible for my host machine (macOS)?

I am running the latest macOS (Sierra) with Docker and Kitematic installed. I am also using VirtualBox for emulation.
I want to use the uwsgi-nginx-flask image, but I have no idea how to make the Python files and the nginx directory inside my container accessible from outside the virtual machine.
I haven't found anything about that on the website either.
Folders between the host and containers can be mapped and mounted with the -v flag at runtime:
$ docker run -it -v /host/directory:/container/directory imagename:tag
You can alternatively use docker cp to copy files into and out of the container. For example:
$ docker cp /path/to/file ContainerName:/path/inside/container
or
$ docker cp ContainerName:/path/inside/container/file .
You can mount a host directory into the Docker container; it will be shared between the host and the container:
docker run --name container_image -d -v ~/host_dir:/container_dir docker_image

How to Copy Files From Docker Ubuntu Container to Windows Host

I can't figure out how to copy files from a docker ubuntu container to a windows host, or vice versa.
My host is Windows 10. When I start Docker, I run the Ubuntu image using
docker run -it ubuntu bash
The documentation I've read says that the way to transfer files is with docker cp, but apparently that command doesn't exist in this Ubuntu image, i.e., bash: docker: command not found.
This must be a dumb oversight on my part. Can someone please give me a little help?
You need to run the docker cp command on the host machine, not inside the container.
The command template is:
docker cp <containerId>:<src_path_inside_container> <target_host_path>
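For example, with a placeholder container id (find the real one with docker ps); the docker calls are guarded so the snippet is a no-op without Docker:

```shell
# docker cp is a client command: run it in a terminal on the Windows
# host, never inside the container's bash session.
cid="1a2b3c4d"   # placeholder: take the real id from `docker ps`
if command -v docker >/dev/null 2>&1; then
  # container -> host
  docker cp "$cid":/root/data.txt ./data.txt || true
  # host -> container
  docker cp ./data.txt "$cid":/root/data.txt || true
fi
echo "copying to/from container $cid"
```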

mount a host volume to a container created through Dockerfile

I'm new to Docker. As per the documentation on Dockerfiles, it is not possible to specify a host volume mapping there, for portability reasons. That is fine, but is there a way to map a host directory (I am on a Mac, so say my home dir /Users/bsr) to /data of an Ubuntu container? The documentation on Docker volumes only talks about docker run; I am not sure how to add a volume after creating the container.
http://docs.docker.com/userguide/dockervolumes/
On Linux you can simply mount a directory of your host system into a Docker container by passing
-v /path/to/host/directory:/path/to/container/directory
to the docker run command.
You can also see it here in the documentation: https://docs.docker.com/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
If you are using boot2docker, things are more complicated. The problem is that boot2docker runs a small Linux VM to host Docker, so if you mount a volume as described above, you will mount a directory of that small Linux VM, not of your Mac.
A workaround is described in the README on the boot2docker GitHub page, using a Samba share:
https://github.com/boot2docker/boot2docker#folder-sharing
The following worked, with the help of #sciutand:
git clone https://github.com/boot2docker/boot2docker.git
cd boot2docker/
docker build -t my-boot2docker-img .
docker run --rm my-boot2docker-img > boot2docker.iso
boot2docker stop
mv ~/.boot2docker/boot2docker.iso ~/.boot2docker/boot2docker.iso.backup
mv boot2docker.iso ~/.boot2docker/boot2docker.iso
VBoxManage sharedfolder add boot2docker-vm -name /Users -hostpath /Users
boot2docker up
docker run -d -P --name web ubuntu