I have a simple example set up. Running a CentOS or Ubuntu image, I've discovered that all of the symlinks inside a mounted volume are broken.
Given the directory structure:
testsyms
├── real
│   ├── one
│   ├── twoHundred
│   └── three
└── syms
    ├── one
    ├── twoHundred
    └── three
and using the following docker command to start my container:
docker run -ti -v $HOME/testsyms/:$HOME/testsyms -w $HOME/testsyms
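(For reference, a complete invocation also needs an image and a command at the end; something like the following, where the centos image and shell are my assumption based on the description above:)
docker run -ti -v $HOME/testsyms/:$HOME/testsyms -w $HOME/testsyms centos /bin/bash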
I then do the following.
Inside the container:
[root@96b9af1cd545 testsyms]# ls -l **/*
-rw-r--r-- 1 501 games 0 Jan 8 06:00 real/one
-rw-r--r-- 1 501 games 0 Jan 8 06:03 real/three
-rw-r--r-- 1 501 games 0 Jan 8 06:00 real/twoHundred
lrwxr-xr-x 1 501 games 11 Jan 8 06:00 syms/one -> l/one
lrwxr-xr-x 1 501 games 19 Jan 8 06:03 syms/three -> ../real/three
lrwxr-xr-x 1 501 games 18 Jan 8 06:01 syms/twoHundred -> l/twoHundred
Outside the container:
tam@tam-osx:testsyms$ ls -l **/*
-rw-r--r-- 1 tam staff 0 Jan 7 23:00 real/one
-rw-r--r-- 1 tam staff 0 Jan 7 23:03 real/three
-rw-r--r-- 1 tam staff 0 Jan 7 23:00 real/twoHundred
lrwxr-xr-x 1 tam staff 11 Jan 7 23:00 syms/one -> ../real/one
lrwxr-xr-x 1 tam staff 19 Jan 7 23:03 syms/three -> /Users../real/three
lrwxr-xr-x 1 tam staff 18 Jan 7 23:01 syms/twoHundred -> ../real/twoHundred
I created the links one and twoHundred outside the container, and link three inside the container. Inside the container, links one and twoHundred are broken; outside the container, link three is broken, as you can see from the outputs above.
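(For reference, the links were presumably created with something along these lines; the exact commands are my reconstruction, not taken from the post:)
# outside the container, on the Mac
cd ~/testsyms/syms && ln -s ../real/one one && ln -s ../real/twoHundred twoHundred
# inside the container
cd $HOME/testsyms/syms && ln -s ../real/three three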
UPDATE:
Based on the comments, I tried to SSH into the docker machine and found that the links are both correct and incorrect. Doing some digging, I found that my shared folder Users exists in two places: I have a /Users directory and a /mnt/hgfs/Users directory. Here is the output for each directory.
/Users/ :
docker@default:/mnt/hgfs$ ls -l /Users/boger/testsyms/**/*
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /Users/boger/testsyms/real/one
-rw-r--r-- 1 501 20 0 Jan 8 06:03 /Users/boger/testsyms/real/three
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /Users/boger/testsyms/real/twoHundred
lrwxr-xr-x 1 501 20 11 Jan 8 06:00 /Users/boger/testsyms/syms/one -> l/one
lrwxr-xr-x 1 501 20 19 Jan 8 06:03 /Users/boger/testsyms/syms/three -> ../real/three
lrwxr-xr-x 1 501 20 18 Jan 8 06:01 /Users/boger/testsyms/syms/twoHundred -> l/twoHundred
/mnt/hgfs/Users/ :
docker@default:/mnt/hgfs$ ls -l /mnt/hgfs/Users/boger/testsyms/**/*
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /mnt/hgfs/Users/boger/testsyms/real/one
-rw-r--r-- 1 501 20 0 Jan 8 06:03 /mnt/hgfs/Users/boger/testsyms/real/three
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /mnt/hgfs/Users/boger/testsyms/real/twoHundred
lrwxr-xr-x 1 501 20 11 Jan 8 06:00 /mnt/hgfs/Users/boger/testsyms/syms/one -> ../real/one
lrwxr-xr-x 1 501 20 19 Jan 8 06:03 /mnt/hgfs/Users/boger/testsyms/syms/three -> /Users../real/three
lrwxr-xr-x 1 501 20 18 Jan 8 06:01 /mnt/hgfs/Users/boger/testsyms/syms/twoHundred -> ../real/twoHundred
It's worth noting that they follow the same pattern as what I showed above from inside and outside the container. Below is my VM config for the shared folders:
sharedFolder0.present = "true"
sharedFolder0.enabled = "true"
sharedFolder0.readAccess = "true"
sharedFolder0.writeAccess = "true"
sharedFolder0.hostPath = "/Users"
sharedFolder0.guestName = "Users"
sharedFolder0.expiration = "never"
sharedFolder0.followSymlinks = "TRUE"
sharedFolder.maxNum = "1"
To work around this, it turns out I just need to mount a different folder: I tried starting docker with -v /mnt/hgfs/Users/... and it works without any issues. I would really like to know what I can do to set up my VM so this isn't a problem down the road for other developers on my team, though. Is my best option really to just ignore the broken directory and mount a new one?
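(Concretely, the working command looks something like this; the full path under /mnt/hgfs and the image are my guesses based on the listings above:)
docker run -ti -v /mnt/hgfs/Users/boger/testsyms:/mnt/hgfs/Users/boger/testsyms -w /mnt/hgfs/Users/boger/testsyms centos /bin/bash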
Our deployment model is that we create containers on the fly with docker-java-api; some of these containers make heavy use of a RocksDB database. The DB files are on the host, like:
ls -lrt /mnt/data/rocksdb
-rw-r--r-- 1 root root 8374 Nov 28 15:32 fileA
-rw-r--r-- 1 root root 0 Nov 28 15:32 fileB
-rw-r--r-- 1 root root 37 Nov 28 15:32 ....
-rw-r--r-- 1 root root 16 Nov 28 15:32 fileC
-rw-r--r-- 1 root root 19646 Nov 28 15:32 ..
-rw-r--r-- 1 root root 22500 Nov 28 15:32 .... etc
/mnt/data/rocksdb gets mounted into a container with an app that uses the DB heavily.
What I notice is that after starting the container, ownership of the files changes to:
ls -lrnt /mnt/data/rocksdbdata/
total 84092
-rw-r--r-- 1 999 999 8374 Nov 28 15:32 fileA
-rw-r--r-- 1 999 999 0 Nov 28 15:32 LOCK
-rw-r--r-- 1 999 999 37 Nov 28 15:32 fileB
-rw-r--r-- 1 999 999 16 Nov 28 15:32 fileC
-rw-r--r-- 1 999 999 19646 Nov 28 15:32 ...
-rw-r--r-- 1 999 999 22500 Nov 28 15:32 .....etc
The user with this UID:GID (999:999) is docker.
Can you tell me why this is happening?
It needs a long explanation. Long story short: because the container runs with root privileges, it changes the ownership of the mounted files to the UID:GID its process runs as (999:999 here).
For more details, please look at this answer:
Docker changes owner of local files mounted as volume
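A common mitigation, offered here as a sketch and an assumption rather than something stated in the linked answer: if the image supports running as an arbitrary user, start the container with the host owner's UID:GID so files it creates or touches keep that ownership.
# rocksdb-app is a placeholder image name; -u only helps if the image's
# entrypoint does not chown the data directory itself.
docker run -u "$(id -u):$(id -g)" -v /mnt/data/rocksdb:/mnt/data/rocksdb rocksdb-app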
I'm getting status code 500 on my Dockerized Flask server.
I bashed into the container to check the logs:
docker exec -ti container_name /bin/bash
In /var/log I found:
root@b80b0c02fd18:/var/log# ls -al
total 224
drwxr-xr-x 1 root root 4096 Oct 13 21:02 .
drwxr-xr-x 1 root root 4096 Oct 12 07:00 ..
-rw-r--r-- 1 root root 9052 Oct 20 20:50 alternatives.log
drwxr-xr-x 1 root root 4096 Oct 20 20:49 apt
-rw-rw---- 1 root utmp 0 Oct 12 07:00 btmp
-rw-r--r-- 1 root root 164661 Oct 20 20:50 dpkg.log
-rw-r--r-- 1 root root 3232 Oct 12 07:00 faillog
-rw-rw-r-- 1 root utmp 29492 Oct 12 07:00 lastlog
-rw-rw-r-- 1 root utmp 0 Oct 12 07:00 wtmp
I couldn't cat or nano the files faillog or lastlog, so I don't know whether they are relevant.
Where do I find the access log or error log for a containerized Flask server?
Run docker logs <your container_name> to see the logs of the container.
You can also find the logs for the Flask app at /var/log/daemon.log.
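For reference, docker logs also supports following the stream and limiting history (standard Docker CLI flags); if you run the Flask development server, it writes its access and error lines to stderr, which is exactly what docker logs shows:
# follow the container's stdout/stderr, starting from the last 100 lines
docker logs -f --tail 100 container_name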
I want to mount a host directory into a container directory so that I can get container-created files back out to the host. I've investigated at least a dozen examples with no luck. As near as I can tell, the following should work.
C:\tmp>ls -al jmeter
total 0
drwxrwxrwx 1 0 0 0 May 22 19:25 .
drwxrwxrwx 1 0 0 0 May 22 19:36 ..
C:\tmp>docker run -v /tmp/jmeter:/tmp/jmeter -it ubuntu bash
root@62a046b1dd74:/# ls -al /tmp/jmeter
total 4
drwxr-xr-x 2 root root 40 May 23 02:00 .
drwxrwxrwt 1 root root 4096 May 23 02:00 ..
root@62a046b1dd74:/# touch /tmp/jmeter/bob.txt
root@62a046b1dd74:/# ls -al /tmp/jmeter
total 4
drwxr-xr-x 2 root root 60 May 23 02:01 .
drwxrwxrwt 1 root root 4096 May 23 02:00 ..
-rw-r--r-- 1 root root 0 May 23 02:01 bob.txt
root@62a046b1dd74:/# exit
exit
C:\tmp>ls -al jmeter
total 0
drwxrwxrwx 1 0 0 0 May 22 19:25 .
drwxrwxrwx 1 0 0 0 May 22 19:36 ..
C:\tmp>
My expectation is that /tmp/jmeter/bob.txt would exist on localhost.
FWIW, localhost is Windows 10 here, but I have the same problem in a GitHub Action, which I believe is Linux.
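(One variant worth comparing, offered purely as an assumption on my part: on Docker Desktop for Windows, /tmp/jmeter may be resolved inside the Docker VM rather than as C:\tmp\jmeter, so spelling out the Windows host path can behave differently; C: must be shared with Docker Desktop for this to reach the Windows filesystem.)
C:\tmp>docker run -v C:\tmp\jmeter:/tmp/jmeter -it ubuntu bash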
I have a docker compose file that contains the below volume mapping.
volumes:
  - /opt/cloudera/parcels/SPARK2/lib/spark2:/opt/cloudera/parcels/SPARK2/lib/spark2
The contents of this directory are:
drwxr-xr-x 13 root root 247 Nov 30 16:39 .
drwxr-xr-x 3 root root 20 Jan 9 2018 ..
drwxr-xr-x 2 root root 4096 Jan 9 2018 bin
drwxr-xr-x 2 root root 39 Jan 9 2018 cloudera
lrwxrwxrwx 1 root root 16 Jan 9 2018 conf -> /etc/spark2/conf ***
drwxr-xr-x 5 root root 50 Jan 9 2018 data
drwxr-xr-x 4 root root 29 Jan 9 2018 examples
drwxr-xr-x 2 root root 8192 May 22 2018 jars
drwxr-xr-x 2 root root 204 Jan 9 2018 kafka-0.10
drwxr-xr-x 2 root root 201 Jan 9 2018 kafka-0.9
-rw-r--r-- 1 root root 17881 Jan 9 2018 LICENSE
drwxr-xr-x 2 root root 4096 Jan 9 2018 licenses
-rw-r--r-- 1 root root 24645 Jan 9 2018 NOTICE
drwxr-xr-x 6 root root 204 Jan 9 2018 python
-rw-r--r-- 1 root root 3809 Jan 9 2018 README.md
-rw-r--r-- 1 root root 313 Jan 9 2018 RELEASE
drwxr-xr-x 2 root root 4096 Jan 9 2018 sbin
lrwxrwxrwx 1 root root 20 Jan 9 2018 work -> /var/run/spark2/work
drwxr-xr-x 2 root root 52 Jan 9 2018 yarn
Of note is the starred conf directory, which is itself a chain of symbolic links that eventually points to the /etc/spark2/conf.cloudera.spark2_on_yarn folder, which contains:
drwxr-xr-x 3 root root 194 Nov 30 16:39 .
drwxr-xr-x 3 root root 54 Nov 12 14:45 ..
-rw-r--r-- 1 root root 13105 Sep 16 03:07 classpath.txt
-rw-r--r-- 1 root root 20 Sep 16 03:07 __cloudera_generation__
-rw-r--r-- 1 root root 148 Sep 16 03:07 __cloudera_metadata__
-rw-r--r-- 1 ember 10000 2060 Nov 30 16:33 envars.test
-rw-r--r-- 1 root root 951 Sep 16 03:07 log4j.properties
-rw-r--r-- 1 root root 1837 Sep 16 03:07 spark-defaults.conf
-rw-r--r-- 1 root root 2331 Sep 16 03:07 spark-env.sh
drwxr-xr-x 2 root root 242 Sep 16 03:07 yarn-conf
When mapping the spark2 directory, only the yarn-conf subfolder shows up; the spark-env.sh file and the other files are absent.
Is it the series of symbolic links that is causing these files to be absent? If so, do I need to explicitly set a mapping for every single folder in order to get all of the necessary dependencies to appear? I was under the impression that docker-compose volumes would recursively mount all files/folders under a particular directory.
The bind mount should faithfully reproduce the contents of the host: conf inside the container should be a symbolic link to /etc/spark2/conf. The container may or may not have anything at that path, but Docker doesn't recursively search the bind-mounted tree and try to do anything special with symlinks.
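One way to make that link resolve, sketched here as an assumption rather than a definitive fix: also bind-mount the host's fully resolved config directory at /etc/spark2/conf inside the container, so the conf symlink has something to point at (someimage is a placeholder; the compose equivalent is simply a second entry under volumes):
docker run \
  -v /opt/cloudera/parcels/SPARK2/lib/spark2:/opt/cloudera/parcels/SPARK2/lib/spark2 \
  -v /etc/spark2/conf.cloudera.spark2_on_yarn:/etc/spark2/conf \
  someimage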
Are you trying to use docker run -v to "install" a Spark distribution in your container? You might be better off building a standalone Docker image with the software you want, and then using a bind mount to only inject the config files. That could look something like
docker run \
-v /etc/spark2/conf:/spark/conf \
-v $PWD/spark:/spark/work \
mysparkimage
Possible duplicate of this question. In short, symlinks don't work very well inside Docker containers.
I am setting up an internal JupyterHub on a multi-GPU server. Jupyter access is provided through a Docker instance. I'd like to limit each user's access to no more than a single GPU. I'd appreciate any suggestion or comment. Thanks.
You can try it with nvidia-docker-compose:
version: "2"
services:
  process1:
    image: nvidia/cuda
    devices:
      - /dev/nvidia0
The problem can be solved in this way: just add the environment variable "NV_GPU" before "nvidia-docker", as follows:
[root@bogon ~]# NV_GPU='4,5' nvidia-docker run -dit --name tf_07 tensorflow/tensorflow:latest-gpu /bin/bash
e04645c2d7ea658089435d64e72603f69859a3e7b6af64af005fb852473d6b56
[root@bogon ~]# docker attach tf_07
root@e04645c2d7ea:/notebooks#
root@e04645c2d7ea:/notebooks# ll /dev
total 4
drwxr-xr-x 5 root root 460 Dec 29 03:52 ./
drwxr-xr-x 22 root root 4096 Dec 29 03:52 ../
crw--w---- 1 root tty 136, 0 Dec 29 03:53 console
lrwxrwxrwx 1 root root 11 Dec 29 03:52 core -> /proc/kcore
lrwxrwxrwx 1 root root 13 Dec 29 03:52 fd -> /proc/self/fd/
crw-rw-rw- 1 root root 1, 7 Dec 29 03:52 full
drwxrwxrwt 2 root root 40 Dec 29 03:52 mqueue/
crw-rw-rw- 1 root root 1, 3 Dec 29 03:52 null
crw-rw-rw- 1 root root 245, 0 Dec 29 03:52 nvidia-uvm
crw-rw-rw- 1 root root 245, 1 Dec 29 03:52 nvidia-uvm-tools
crw-rw-rw- 1 root root 195, 4 Dec 29 03:52 nvidia4
crw-rw-rw- 1 root root 195, 5 Dec 29 03:52 nvidia5
crw-rw-rw- 1 root root 195, 255 Dec 29 03:52 nvidiactl
lrwxrwxrwx 1 root root 8 Dec 29 03:52 ptmx -> pts/ptmx
drwxr-xr-x 2 root root 0 Dec 29 03:52 pts/
crw-rw-rw- 1 root root 1, 8 Dec 29 03:52 random
drwxrwxrwt 2 root root 40 Dec 29 03:52 shm/
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root root 5, 0 Dec 29 03:52 tty
crw-rw-rw- 1 root root 1, 9 Dec 29 03:52 urandom
crw-rw-rw- 1 root root 1, 5 Dec 29 03:52 zero
root@e04645c2d7ea:/notebooks#
Or, read the nvidia-docker wiki on GitHub.
There are 3 options.
1. Docker with the NVIDIA runtime (version 2.0.x)
According to the official documentation:
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=2,3
(see the fuller sketch after this list)
2. nvidia-docker (version 1.0.x)
Based on a popular post:
nvidia-docker run .... -e CUDA_VISIBLE_DEVICES=0,1,2
(it works with TensorFlow)
3. Programmatically:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2"
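A fuller sketch of option 1 for the JupyterHub case, reusing the tensorflow image shown earlier (the container name and the choice of GPU index are assumptions): give each user's container exactly one GPU by passing a single device index.
# expose only GPU 0 to this user's container (requires the NVIDIA runtime, version 2.x)
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 -dit --name user1_notebook tensorflow/tensorflow:latest-gpu /bin/bash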