This page describes the various folders and symlinks that Bazel generates.
I am interested in this one:
external/ <== The directory that remote repositories are
downloaded/symlinked into.
I would also like to get two things:
1. The absolute path to this folder.
2. The path of the symlink used to access this folder from the workspace root.
I think that (2) follows this pattern: bazel-my-project/external
How can I get these paths from the Bazel CLI?
Use $(bazel info output_base)/external. $(bazel info execution_root)/external may not always work, since the execution root is an ephemeral symlink tree. Its external/ directory just contains symlinks into the output base's external/:
$ ls -al $(bazelisk info output_base)/execroot/__main__/external
total 44
drwxr-x--- 2 jingwen primarygroup 4096 Jul 6 12:51 .
drwxr-x--- 5 jingwen primarygroup 4096 Jul 6 12:51 ..
lrwxrwxrwx 1 jingwen primarygroup 112 Jul 6 12:51 bazel_tools -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/bazel_tools
lrwxrwxrwx 1 jingwen primarygroup 116 Jul 6 12:51 local_config_cc -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/local_config_cc
lrwxrwxrwx 1 jingwen primarygroup 110 Jul 6 12:51 local_jdk -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/local_jdk
lrwxrwxrwx 1 jingwen primarygroup 106 Jul 6 12:51 maven -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/maven
lrwxrwxrwx 1 jingwen primarygroup 110 Jul 6 12:51 platforms -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/platforms
lrwxrwxrwx 1 jingwen primarygroup 122 Jul 6 12:51 remote_coverage_tools -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/remote_coverage_tools
lrwxrwxrwx 1 jingwen primarygroup 118 Jul 6 12:51 remote_java_tools -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/remote_java_tools
lrwxrwxrwx 1 jingwen primarygroup 124 Jul 6 12:51 remote_java_tools_linux -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/remote_java_tools_linux
lrwxrwxrwx 1 jingwen primarygroup 118 Jul 6 12:51 remotejdk11_linux -> /home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external/remotejdk11_linux
If you just want to rely on the workspace symlinks, you can use readlink on the bazel-out symlink:
$ realpath $(readlink -f bazel-out)/../../../external
/home/jingwen/.cache/bazel/_bazel_jingwen/e375750af2fc236b3604a3cb5bfe4b91/external
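The path arithmetic in that one-liner can be sketched without Bazel at all. The layout below is a stand-in for the real output base and workspace (all directory names are made up for illustration):

```shell
# Sketch: how resolving the bazel-out convenience symlink and walking up
# three levels lands on the output base, whose external/ holds the
# downloaded repositories. Paths here only mimic Bazel's real layout.
set -eu
root=$(mktemp -d)

# Fake output base: an execroot tree plus an external/ directory.
mkdir -p "$root/cache/execroot/__main__/bazel-out" "$root/cache/external"

# Fake workspace with the bazel-out convenience symlink.
mkdir -p "$root/workspace"
ln -s "$root/cache/execroot/__main__/bazel-out" "$root/workspace/bazel-out"

# Same expression as above: resolve the symlink, go up three levels.
cd "$root/workspace"
external_dir=$(realpath "$(readlink -f bazel-out)/../../../external")
echo "$external_dir"
```

With a real workspace, the echoed path would be the output base's external/ directory.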
Related
I am running the nginxinc/nginx-unprivileged:stable-alpine Docker image on a RHEL 8.8 server. When the container starts, it creates directories and files with a umask of 0027.
But my Docker 20.10.17 daemon runs with a umask of 0022. My server's default umask is 0027, which I cannot change due to security requirements.
# systemd-analyze dump |egrep -i 'docker|umask'
ReferencedBy: docker.service (destination-file)
UMask: 0022
Here are the file system permissions inside the container on the RHEL 8 server:
# ls -l
total 76
drwxr-x--- 1 root root 4096 Jun 16 21:57 app
drwxr-x--- 1 root root 4096 Jun 16 21:57 bin
drwxr-x--- 5 root root 360 Jun 17 20:18 dev
drwxr-x--- 1 root root 4096 Jun 16 21:57 docker-entrypoint.d
-rwxr-x--- 1 root root 1202 Jun 16 21:57 docker-entrypoint.sh
drwxr-x--- 1 root root 4096 Jun 17 20:18 etc
drwxr-x--- 2 root root 4096 Jun 16 21:57 home
drwxrwxrwt 1 root root 4096 Jun 16 21:57 tmp
drwxr-x--- 1 root root 4096 Jun 16 21:57 usr
drwxr-x--- 1 root root 4096 Jun 16 21:57 var
Here are the file system permissions inside the container on a Windows machine with the same Docker image:
ls -l
drwxr-xr-x 2 root root 4096 May 23 16:51 bin
drwxr-xr-x 5 root root 360 Jun 17 18:39 dev
drwxr-xr-x 1 root root 4096 Jun 16 10:36 docker-entrypoint.d
-rwxr-xr-x 1 root root 1202 Jun 16 10:36 docker-entrypoint.sh
drwxr-xr-x 1 root root 4096 Jun 17 18:39 etc
drwxr-xr-x 2 root root 4096 May 23 16:51 home
drwxr-xr-x 1 root root 4096 May 23 16:51 usr
drwxr-xr-x 1 root root 4096 May 23 16:51 var
How can I make the Docker container's file system get created with a umask of 0022?
Thanks
when docker container starts
That means you need to build your own image, based on nginxinc/nginx-unprivileged:stable-alpine, with a new entry point like:
#!/bin/sh
# entrypoint.sh
umask 022
# ... other first-time setup ...
exec "$@"
See "Change umask in docker containers" for more details, but the idea remains the same.
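To check what a given umask produces, here is a small sketch (pure shell, no Docker needed) showing how 027 yields the drwxr-x--- and -rw-r----- pattern from the listing above, while 022 yields the familiar drwxr-xr-x:

```shell
# Sketch: the effect of umask on files and directories created by a
# process. The mode is the default (777 for dirs, 666 for files) with
# the umask bits stripped.
set -eu
work=$(mktemp -d)
cd "$work"

umask 027
mkdir dir_027            # 777 & ~027 = 750 -> drwxr-x---
touch file_027           # 666 & ~027 = 640 -> -rw-r-----

umask 022
mkdir dir_022            # 777 & ~022 = 755 -> drwxr-xr-x
touch file_022           # 666 & ~022 = 644 -> -rw-r--r--

stat -c '%a %n' dir_027 file_027 dir_022 file_022
```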
My computer is running Windows 10 and Docker for Windows.
I am using a Docker volume to code faster; the code lives on the Windows side. In docker-compose.yaml I mount my workspace on the host to the root of the Apache server on the guest:
volumes:
- ./:/var/www/html
When I create the file web\modules\custom\hello_world\hello_world.info.yml on the host, that file replicates on the guest side but remains empty:
PS C:\Users\jeanp\CONSULTANT\dockertest> docker exec -it my_drupal9_project_nginx /bin/bash
/var/www/html$ ls -al web/modules/custom/hello_world/
total 8
drwxr-xr-x 2 root root 4096 Aug 3 16:46 .
drwxr-xr-x 3 root root 4096 Aug 3 16:46 ..
-rwxr-xr-x 1 root root 0 Aug 3 16:46 hello_world.info.yml
(hello_world.info.yml is 0 bytes long and can be edited only by root)
On the guest side, when I try to edit hello_world.info.yml I cannot save it because it is marked as read-only
web/modules/custom/hello_world/hello_world.info.yml [Readonly] 0/0 100%
On the guest side I asked whoami; the answer is wodby:
/var/www/html$ whoami
wodby
The other files in the image on the guest belong to wodby:
/var/www/html$ ls -al web
total 84
drwxr-xr-x 7 wodby wodby 4096 Jul 22 02:17 .
drwxr-xr-x 4 wodby wodby 4096 Jul 31 12:04 ..
-rw-r--r-- 1 wodby wodby 1025 Jul 22 02:17 .csslintrc
-rw-r--r-- 1 wodby wodby 151 Jul 22 02:17 .eslintignore
-rw-r--r-- 1 wodby wodby 41 Jul 22 02:17 .eslintrc.json
-rw-r--r-- 1 wodby wodby 2314 Jul 22 02:17 .ht.router.php
-rw-r--r-- 1 wodby wodby 7572 Jul 22 02:17 .htaccess
-rw-r--r-- 1 wodby wodby 94 Jul 22 02:17 INSTALL.txt
-rw-r--r-- 1 wodby wodby 3205 Jul 22 02:17 README.md
-rw-r--r-- 1 wodby wodby 315 Jul 22 02:17 autoload.php
drwxr-xr-x 12 wodby wodby 4096 Jul 20 21:42 core
-rw-r--r-- 1 wodby wodby 1507 Jul 22 02:17 example.gitignore
-rw-r--r-- 1 wodby wodby 549 Jul 22 02:17 index.php
drwxr-xr-x 3 wodby wodby 4096 Aug 3 16:46 modules
drwxr-xr-x 2 wodby wodby 4096 Jul 22 02:17 profiles
-rw-r--r-- 1 wodby wodby 1586 Jul 22 02:17 robots.txt
drwxr-xr-x 3 wodby wodby 4096 Jul 22 02:17 sites
drwxr-xr-x 2 wodby wodby 4096 Jul 22 02:17 themes
-rw-r--r-- 1 wodby wodby 804 Jul 22 02:17 update.php
-rw-r--r-- 1 wodby wodby 4016 Jul 22 02:17 web.config
My question is the following:
How can I make the files created on the guest through synchronization from the host belong to wodby instead of root?
Sounds like you could benefit from setting up the folder and permissions via the Dockerfile prior to mounting the files in:
https://github.com/moby/moby/issues/2259#issuecomment-48286811
Otherwise, this question may help you out; it details a volumes-from pattern:
What is the (best) way to manage permissions for Docker shared volumes?
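A sketch of what that Dockerfile approach could look like. The base image tag and the wodby user/group are assumptions taken from the question, not a verified recipe:

```dockerfile
# Sketch: pre-create the mount point with the right owner before the
# volume is bound over it. Adjust FROM to the image you actually use.
FROM wodby/drupal-php:latest
USER root
RUN mkdir -p /var/www/html && chown -R wodby:wodby /var/www/html
USER wodby
```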
I have a docker compose file that contains the below volume mapping.
volumes:
- /opt/cloudera/parcels/SPARK2/lib/spark2:/opt/cloudera/parcels/SPARK2/lib/spark2
The contents of this directory are:
rwxr-xr-x 13 root root 247 Nov 30 16:39 .
drwxr-xr-x 3 root root 20 Jan 9 2018 ..
drwxr-xr-x 2 root root 4096 Jan 9 2018 bin
drwxr-xr-x 2 root root 39 Jan 9 2018 cloudera
lrwxrwxrwx 1 root root 16 Jan 9 2018 conf -> /etc/spark2/conf ***
drwxr-xr-x 5 root root 50 Jan 9 2018 data
drwxr-xr-x 4 root root 29 Jan 9 2018 examples
drwxr-xr-x 2 root root 8192 May 22 2018 jars
drwxr-xr-x 2 root root 204 Jan 9 2018 kafka-0.10
drwxr-xr-x 2 root root 201 Jan 9 2018 kafka-0.9
-rw-r--r-- 1 root root 17881 Jan 9 2018 LICENSE
drwxr-xr-x 2 root root 4096 Jan 9 2018 licenses
-rw-r--r-- 1 root root 24645 Jan 9 2018 NOTICE
drwxr-xr-x 6 root root 204 Jan 9 2018 python
-rw-r--r-- 1 root root 3809 Jan 9 2018 README.md
-rw-r--r-- 1 root root 313 Jan 9 2018 RELEASE
drwxr-xr-x 2 root root 4096 Jan 9 2018 sbin
lrwxrwxrwx 1 root root 20 Jan 9 2018 work -> /var/run/spark2/work
drwxr-xr-x 2 root root 52 Jan 9 2018 yarn
Of note is the starred conf directory, which is itself a chain of symbolic links that eventually point to the /etc/spark2/conf.cloudera.spark2_on_yarn folder, which contains:
drwxr-xr-x 3 root root 194 Nov 30 16:39 .
drwxr-xr-x 3 root root 54 Nov 12 14:45 ..
-rw-r--r-- 1 root root 13105 Sep 16 03:07 classpath.txt
-rw-r--r-- 1 root root 20 Sep 16 03:07 __cloudera_generation__
-rw-r--r-- 1 root root 148 Sep 16 03:07 __cloudera_metadata__
-rw-r--r-- 1 ember 10000 2060 Nov 30 16:33 envars.test
-rw-r--r-- 1 root root 951 Sep 16 03:07 log4j.properties
-rw-r--r-- 1 root root 1837 Sep 16 03:07 spark-defaults.conf
-rw-r--r-- 1 root root 2331 Sep 16 03:07 spark-env.sh
drwxr-xr-x 2 root root 242 Sep 16 03:07 yarn-conf
When mapping the spark2 directory, only the yarn-conf subfolder shows up; the spark-env.sh file and the other files are absent.
Is it the series of symbolic links that is causing these files to be absent? If so, do I need to explicitly set a mapping for every single folder in order to get all of the necessary dependencies to appear? I was under the impression that docker-compose volumes would recursively mount all files/folders under a particular directory.
The bind mount should faithfully reproduce the contents of the host: conf inside the container should be a symbolic link to /etc/spark2/conf. The container may or may not have anything at that path, but Docker doesn't recursively search the bind-mounted tree and try to do anything special with symlinks.
Are you trying to use docker run -v to "install" a Spark distribution in your container? You might be better off building a standalone Docker image with the software you want, and then using a bind mount to only inject the config files. That could look something like
docker run \
-v /etc/spark2/conf:/spark/conf \
-v $PWD/spark:/spark/work \
mysparkimage
Possible duplicate of this question. In short, symlinks that point outside the mounted tree don't work well inside Docker containers.
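The underlying reason: a symlink stores only a target path string, which is resolved against the filesystem of whoever reads it. A link that points at a host path (like /etc/spark2/conf) dangles inside a container that lacks that path. A quick sketch, no Docker required (the target path is made up):

```shell
# Sketch: a symlink is just a stored string. If the observer's
# filesystem has nothing at that path, the link dangles.
set -eu
work=$(mktemp -d)
cd "$work"

ln -s /no/such/host/path conf   # absolute target, stored verbatim

readlink conf                   # prints the stored target string...
[ ! -e conf ]                   # ...but following it fails here
echo "dangling: yes"
```

Inside a container, the bind mount faithfully reproduces the link, but its target is resolved against the container's root, not the host's.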
I am setting up an internal JupyterHub on a multi-GPU server. Jupyter access is provided through a Docker instance. I'd like to limit each user's access to no more than a single GPU. I'd appreciate any suggestions or comments. Thanks.
You can try it with nvidia-docker-compose:
version: "2"
services:
  process1:
    image: nvidia/cuda
    devices:
      - /dev/nvidia0
The problem can be solved this way: just add the environment variable NV_GPU before nvidia-docker, as follows:
[root@bogon ~]# NV_GPU='4,5' nvidia-docker run -dit --name tf_07 tensorflow/tensorflow:latest-gpu /bin/bash
e04645c2d7ea658089435d64e72603f69859a3e7b6af64af005fb852473d6b56
[root@bogon ~]# docker attach tf_07
root@e04645c2d7ea:/notebooks#
root@e04645c2d7ea:/notebooks# ll /dev
total 4
drwxr-xr-x 5 root root 460 Dec 29 03:52 ./
drwxr-xr-x 22 root root 4096 Dec 29 03:52 ../
crw--w---- 1 root tty 136, 0 Dec 29 03:53 console
lrwxrwxrwx 1 root root 11 Dec 29 03:52 core -> /proc/kcore
lrwxrwxrwx 1 root root 13 Dec 29 03:52 fd -> /proc/self/fd/
crw-rw-rw- 1 root root 1, 7 Dec 29 03:52 full
drwxrwxrwt 2 root root 40 Dec 29 03:52 mqueue/
crw-rw-rw- 1 root root 1, 3 Dec 29 03:52 null
crw-rw-rw- 1 root root 245, 0 Dec 29 03:52 nvidia-uvm
crw-rw-rw- 1 root root 245, 1 Dec 29 03:52 nvidia-uvm-tools
crw-rw-rw- 1 root root 195, 4 Dec 29 03:52 nvidia4
crw-rw-rw- 1 root root 195, 5 Dec 29 03:52 nvidia5
crw-rw-rw- 1 root root 195, 255 Dec 29 03:52 nvidiactl
lrwxrwxrwx 1 root root 8 Dec 29 03:52 ptmx -> pts/ptmx
drwxr-xr-x 2 root root 0 Dec 29 03:52 pts/
crw-rw-rw- 1 root root 1, 8 Dec 29 03:52 random
drwxrwxrwt 2 root root 40 Dec 29 03:52 shm/
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root root 5, 0 Dec 29 03:52 tty
crw-rw-rw- 1 root root 1, 9 Dec 29 03:52 urandom
crw-rw-rw- 1 root root 1, 5 Dec 29 03:52 zero
root@e04645c2d7ea:/notebooks#
Or read the nvidia-docker wiki on GitHub.
There are three options.
1. Docker with the NVIDIA runtime (version 2.0.x)
According to the official documentation:
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=2,3
2. nvidia-docker (version 1.0.x)
Based on a popular post:
nvidia-docker run .... -e CUDA_VISIBLE_DEVICES=0,1,2
(it works with TensorFlow)
3. Programmatically:
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0,1,2"
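For the programmatic option, the one subtlety is ordering: the variable must be set before the framework initializes CUDA, which in practice means before the framework import. A minimal sketch (the device list is just an example):

```python
import os

# CUDA_VISIBLE_DEVICES must be set before TensorFlow/PyTorch initializes
# CUDA, so set it before importing the framework.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2"

# The framework will now see only these GPUs, renumbered from 0.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```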
I have a simple example set up running a CentOS or Ubuntu image, and I've discovered that all the symlinks inside my mounted volume are broken.
Given the directory structure:
testsyms
real
--one
--twoHundred
--three
syms
--one
--twoHundred
--three
and using the following docker command to start my container
docker run -ti -v $HOME/testsyms/:$HOME/testsyms -w $HOME/testsyms
I then do the following
Inside the container:
[root#96b9af1cd545 testsyms]# ls -l **/*
-rw-r--r-- 1 501 games 0 Jan 8 06:00 real/one
-rw-r--r-- 1 501 games 0 Jan 8 06:03 real/three
-rw-r--r-- 1 501 games 0 Jan 8 06:00 real/twoHundred
lrwxr-xr-x 1 501 games 11 Jan 8 06:00 syms/one -> l/one
lrwxr-xr-x 1 501 games 19 Jan 8 06:03 syms/three -> ../real/three
lrwxr-xr-x 1 501 games 18 Jan 8 06:01 syms/twoHundred -> l/twoHundred
Outside the container:
tam#tam-osx:testsyms$ ls -l **/*
-rw-r--r-- 1 tam staff 0 Jan 7 23:00 real/one
-rw-r--r-- 1 tam staff 0 Jan 7 23:03 real/three
-rw-r--r-- 1 tam staff 0 Jan 7 23:00 real/twoHundred
lrwxr-xr-x 1 tam staff 11 Jan 7 23:00 syms/one -> ../real/one
lrwxr-xr-x 1 tam staff 19 Jan 7 23:03 syms/three -> /Users../real/three
lrwxr-xr-x 1 tam staff 18 Jan 7 23:01 syms/twoHundred -> ../real/twoHundred
I created the links one and twoHundred outside the container, while I created link three inside the container. Inside the container, links one and twoHundred are broken; outside the container, link three is broken, as you can see from the outputs above.
UPDATE:
Based on the comments I tried to ssh into the Docker machine and found that the links are both correct and incorrect. Doing some digging, I found that my shared folder Users exists in two places: a /Users directory and a /mnt/hgfs/Users directory. Here is the output of each directory.
/Users/ :
docker#default:/mnt/hgfs$ ls -l /Users/boger/testsyms/**/*
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /Users/boger/testsyms/real/one
-rw-r--r-- 1 501 20 0 Jan 8 06:03 /Users/boger/testsyms/real/three
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /Users/boger/testsyms/real/twoHundred
lrwxr-xr-x 1 501 20 11 Jan 8 06:00 /Users/boger/testsyms/syms/one -> l/one
lrwxr-xr-x 1 501 20 19 Jan 8 06:03 /Users/boger/testsyms/syms/three -> ../real/three
lrwxr-xr-x 1 501 20 18 Jan 8 06:01 /Users/boger/testsyms/syms/twoHundred -> l/twoHundred
/mnt/hgfs/Users/ :
docker#default:/mnt/hgfs$ ls -l /mnt/hgfs/Users/boger/testsyms/**/*
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /mnt/hgfs/Users/boger/testsyms/real/one
-rw-r--r-- 1 501 20 0 Jan 8 06:03 /mnt/hgfs/Users/boger/testsyms/real/three
-rw-r--r-- 1 501 20 0 Jan 8 06:00 /mnt/hgfs/Users/boger/testsyms/real/twoHundred
lrwxr-xr-x 1 501 20 11 Jan 8 06:00 /mnt/hgfs/Users/boger/testsyms/syms/one -> ../real/one
lrwxr-xr-x 1 501 20 19 Jan 8 06:03 /mnt/hgfs/Users/boger/testsyms/syms/three -> /Users../real/three
lrwxr-xr-x 1 501 20 18 Jan 8 06:01 /mnt/hgfs/Users/boger/testsyms/syms/twoHundred -> ../real/twoHundred
It's worth noting they follow the same pattern as what I showed above from inside and outside the container. Below is my VM config for the shared folders:
sharedFolder0.present = "true"
sharedFolder0.enabled = "true"
sharedFolder0.readAccess = "true"
sharedFolder0.writeAccess = "true"
sharedFolder0.hostPath = "/Users"
sharedFolder0.guestName = "Users"
sharedFolder0.expiration = "never"
sharedFolder0.followSymlinks = "TRUE"
sharedFolder.maxNum = "1"
To work around this, it turns out I just need to mount a different folder. I tried starting Docker with -v /mnt/hgfs/Users/... and it works without any issues. I would really like to know what I can do to set up my VM so this isn't a problem down the road for other developers on my team, though. Is my best option really to just ignore the broken directory and mount a new one?
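One mitigation the team could adopt regardless of the VM setup: always create links with relative targets, which survive a change of mount root; absolute targets do not. A sketch of the difference (temporary directories stand in for /Users and /mnt/hgfs/Users):

```shell
# Sketch: relative symlinks survive relocating the tree to a different
# root; absolute ones dangle once the original root is gone.
set -eu
rootA=$(mktemp -d)                      # stands in for /Users
mkdir -p "$rootA/testsyms/real" "$rootA/testsyms/syms"
touch "$rootA/testsyms/real/one"

ln -s ../real/one "$rootA/testsyms/syms/rel"                # relative
ln -s "$rootA/testsyms/real/one" "$rootA/testsyms/syms/abs" # absolute

# "Remount" the tree at a different root, as hgfs does.
rootB=$(mktemp -d)                      # stands in for /mnt/hgfs/Users
cp -R "$rootA/testsyms" "$rootB/"       # cp -R keeps symlinks as symlinks
rm -rf "$rootA"

[ -e "$rootB/testsyms/syms/rel" ] && echo "relative: ok"
[ ! -e "$rootB/testsyms/syms/abs" ] && echo "absolute: dangling"
```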