I want to hide all irrelevant source files from build actions because some tools explore the host file system; for example, node searches for a node_modules directory from the working directory up to the root /. But linux-sandbox doesn't seem to hide host files outside the sandbox:
genrule(
    name = "foo",
    outs = ["x"],
    cmd = "ls ~ | tee $@",
)
Outputs:
<my home files>
Target //:foo up-to-date:
bazel-bin/x
INFO: Elapsed time: 0.088s, Critical Path: 0.01s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
According to the official doc, linux-sandbox makes host files read-only but doesn't hide them.
Is there any way to hide host files?
One can make a host path inaccessible in the linux sandbox with --sandbox_block_path.
It's also possible to remove all host directories from the sandbox except ones explicitly added with --sandbox_add_mount_pair by employing the --experimental_use_hermetic_linux_sandbox flag.
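For example (a sketch; which paths to block or mount depends entirely on your setup):
# Hide specific host directories from sandboxed actions:
bazel build //:foo --sandbox_block_path=/home --sandbox_block_path=/root

# Or start from an (almost) empty sandbox and mount only what actions need:
bazel build //:foo \
    --experimental_use_hermetic_linux_sandbox \
    --sandbox_add_mount_pair=/bin \
    --sandbox_add_mount_pair=/usr \
    --sandbox_add_mount_pair=/lib \
    --sandbox_add_mount_pair=/lib64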
This is what my jenkinx/jenkins_home/workspace folder looks like (while doing ls -la):
drwxrwxrwx 24 nfsnobody nfsnobody 4096 Sep 29 18:26 workspace
There is a folder inside this workspace that was created automatically by Jenkins when I build a job. The job name is Sandbox_Test-Job.
Here's the folder
drwxr-xr-x 2 nfsnobody nfsnobody 4096 Sep 29 18:26 Sandbox_Test-Job
As you can see, the host machine's user does not have write permission to this folder, and the script on the host machine is unable to write to Sandbox_Test-Vinod_M.
I have to manually set the permissions on this folder before the script can write to it. How can we make sure that when Jenkins creates the job folder for each job, the folder has write permission for the host user?
First, you want to run ls -n to find the real UID/GID of the files/dirs instead of the display names. Next, check to see if that user appears in your /etc/passwd:
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
You need the numeric UID displayed by ls -n, not the name nfsnobody (65534).
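For example (a sketch; jenkins is assumed to be the account the host script runs as, and the path is the one from the question):
ls -n jenkinx/jenkins_home/workspace    # the numeric UID/GID actually stored for the job folders
id jenkins                              # the UID of the jenkins user on this host
getent passwd 65534                     # which name, if any, this host maps 65534 to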
It's unlikely that nfsnobody is the owner of the files (RHEL NFS reference, Linux Home Server HOWTO - Fedora); more likely the files were written to an NFS volume shared across systems where the UID for jenkins (run id jenkins) is not the same across them.
Align the UIDs (non-trivial, as you must fix the passwd entries plus the existing ownership UIDs) and things will then be OK.
Suggested reading from SUSE and ServerFault.
If you're lucky, all the files have one UID and you just need to sync the UID in the passwd files, or maybe blow them all away if it's just the workspaces.
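If so, aligning them might look roughly like this (UID 1001 is purely illustrative; use whatever UID jenkins should have on every host):
usermod -u 1001 jenkins                              # on each host where the jenkins UID differs
groupmod -g 1001 jenkins
chown -R 1001:1001 jenkinx/jenkins_home/workspace    # re-own the existing files on the shared volume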
ps: Not really a Jenkins issue; better guidance is to be found on ServerFault or SuperUser.
pps: there is some help on S/O worth reading as well (search "nfsnobody"):
nfsnobody User Privileges, chown: invalid user: ‘nfsnobody’ in fedora 32 after install nfs
I have a VPN client in my Docker container (ubuntu:18.04).
The client must do the following:
mv /etc/resolv.conf /etc/resolv.conf.orig
Then the client should create a new /etc/resolv.conf with its DNS servers. However, the move fails with an error:
mv: cannot move '/etc/resolv.conf' to '/etc/resolv.conf.orig': Device or resource busy
Can this be fixed? Thank you in advance.
P.S.: I can't change the VPN client code.
Within the Docker container the /etc/resolv.conf file is not an ordinary regular file. Docker manages it in a special manner: the container engine writes container-specific configuration into the file outside of the container and bind-mounts it to /etc/resolv.conf inside the container.
When your VPN client runs mv /etc/resolv.conf /etc/resolv.conf.orig, things boil down to the rename(2) syscall (or a similar call from this family), and, according to the manpage for this syscall, the EBUSY (Device or resource busy) error can be returned for a few reasons, including the situation when the original file is a mountpoint:
EBUSY
The rename fails because oldpath or newpath is a directory that is in use by some process (perhaps as current working directory, or as root directory, or because it was open for reading) or is in use by the system (for example as mount point), while the system considers this an error. (Note that there is no requirement to return EBUSY in such cases — there is nothing wrong with doing the rename anyway — but it is allowed to return EBUSY if the system cannot otherwise handle such situations.)
Though there is a remark that the error is not guaranteed to be produced in such circumstances, it seems that it always fires for bind-mount targets (which is probably what happens here):
$ touch sourcefile destfile
$ sudo mount --bind sourcefile destfile
$ mv destfile anotherfile
mv: cannot move 'destfile' to 'anotherfile': Device or resource busy
So, similarly, you cannot move /etc/resolv.conf inside the container, because it is a bind-mount, and there is no straightforward solution.
Given that the bind-mount of /etc/resolv.conf is a read-write mount, not a read-only one, it is still possible to overwrite this file:
$ mount | grep resolv.conf
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime)
So, a possible fix could be to copy this file to the .orig backup and then rewrite the original one in place, instead of renaming the original file and then re-creating it.
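In shell terms that would be something like (a sketch; the nameserver value is just an example):
cp /etc/resolv.conf /etc/resolv.conf.orig             # keep the backup the client wants
printf 'nameserver 10.0.0.53\n' > /etc/resolv.conf    # rewrite in place instead of mv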
Unfortunately, this does not meet your restrictions (I can't change the VPN client code), so I bet that you are out of luck here.
Any method that requires moving a file onto /etc/resolv.conf fails in a Docker container.
The workaround is to rewrite the original file instead of moving or renaming a modified version onto it.
For example, use the following at a bash prompt:
(rc=$(sed 's/^\(nameserver 192\.168\.\)/# \1/' /etc/resolv.conf)
echo "$rc" > /etc/resolv.conf)
This works by rewriting /etc/resolv.conf as follows:
read and modify the current contents of /etc/resolv.conf through the stream editor, sed
the sed script in this example is for commenting out lines starting with nameserver 192.168.
save the updated contents in a variable, rc
overwrite the original file /etc/resolv.conf with updated contents in "$rc"
The command list is in parentheses so that it runs in a sub-shell, to avoid polluting the current shell's namespace with the variable name rc, just in case it happens to be in use.
Note that this command does not require sudo since it is taking advantage of the super user privileges available by default inside the container.
Note that sed -i (editing in-place) involves moving the updated file onto the original and will not work.
But if the visual editor, vi, is available in the container, editing and saving /etc/resolv.conf with vi works, since vi modifies the original file directly.
I am sharing a docker-compose file with a team member to easily build our app. We're both on OSX and it works fine from my machine, but my colleague is getting the following error:
ERROR: for backend Cannot start service backend: b'Mounts denied: \r\nThe path /usr/bin/docker\r\nis not shared from OS X and is not known to Docker.\r\nYou can configure shared paths from Docker -> Preferences... -> File Sharing.\r\nSee https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.\r\n.'
I assume it is due to the following statement in the docker-compose.yaml.
volumes:
- "/usr/bin/docker:/usr/bin/docker"
I didn't have to alter Docker -> Preferences -> File Sharing to make this work. I only have the default dirs shared: /Users, /Volumes, /tmp, /private.
How come it isn't working on his machine? Does he have to add the /usr dir as a shared dir? If so, how come I don't have to?
UPDATE
The problem was that the docker executable was located in /usr/local/bin/ rather than /usr/bin. I have no idea why docker installed the executable differently despite both machines being OSX.
That's a misleading error; it should have asked you to check whether the path /usr/bin/docker exists on the host machine. Docker (at least on Mac) will attempt to create the directory if it doesn't exist. Apparently, your teammate isn't logged in with sufficient privileges to create /usr/bin/docker.
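A quick way to confirm this on each machine (a sketch):
which docker             # on many Mac installs this prints /usr/local/bin/docker
ls -l /usr/bin/docker    # "No such file or directory" here would explain the denied mount
Pointing the volume at the path reported by which docker (or parameterizing it per machine) avoids hard-coding a location that differs between installs.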
Using this docker image:
docker build -t batocera-docker https://github.com/batocera-linux/batocera.docker.git
I launch a container this way, so that the sources are available in the F:\docker Windows folder for browsing:
docker run -it -v F:\docker:/build batocera-docker
The following commands start the build process:
git clone git://git.buildroot.net/buildroot
cd buildroot/
make pc_x86_64_bios_defconfig
make
This fails when processing the "host-gmp" component:
>>> host-gmp 6.1.2 Building
The build fails with the following error (but from experimenting, it seems it does not always fail on the same files):
m4: cannot open `invert_limb_table.asm': No such file or directory
This is strange because the following command shows the file exists where it should (and cat shows valid file content!):
root@fe9bc1b08539:/build/buildroot# ls -la /build/buildroot/output/build/host-gmp-6.1.2/mpn/invert_limb_table.asm
lrwxrwxrwx 1 root root 35 Feb 12 22:01 /build/buildroot/output/build/host-gmp-6.1.2/mpn/invert_limb_table.asm -> ../mpn/x86_64/invert_limb_table.asm
Sometimes, the error states "File handle stale".
However, such errors always occur on symbolic link files (symlinks or hardlinks?).
I'm confused because creating a symbolic link in the mounted folder seems to work (it works using the ln command), but then the build fails at some point, as if the overlay file system of the container were not synchronizing its content with the mounted folder "fast enough"?
Would there be any workaround?
(I could build in a container folder, but that is trivial and not very useful to me, as the sources would then not be available from outside.)
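For what it's worth, a sketch of that last workaround, building in a container-local directory and copying only the artifacts to the Windows mount afterwards (paths are illustrative), would be:
git clone git://git.buildroot.net/buildroot /root/buildroot   # container-local path, not under /build
cd /root/buildroot
make pc_x86_64_bios_defconfig
make
cp -r output/images /build/                                   # expose only the build artifacts via F:\docker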
I am running Docker Toolbox v1.13.1a on Windows 7 Pro Service Pack 1 x64,
with VirtualBox version 5.1.14 r112924.
When I try to run any Docker image, e.g. the official postgres image from Docker Hub, with volumes disabled, it works fine!
But when I enable the volumes it fails.
I tried all the official documentation.
The VM has the shared folder set up as required and has full access to it:
shared folder screenshot
In the case of my postgresql example, it crashes with the following log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... LOG: could not link file "pg_xlog/xlogtemp.27" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
I know it's a problem with folder permissions, but I'm kinda stuck!
A ton of thanks in advance.
I've been busy with this problem all day, and my conclusion is that it's currently simply not possible to run postgresql inside a docker container while keeping your data persistent in a separate volume mounted from the Windows host.
I even tried running the container without linking to a volume, copying the data that was originally in /var/lib/postgresql into a folder on my host OS (Windows 10 Home), and then copying that into the folder that then got linked to the container itself.
Alas, I got the following error:
FATAL: data directory "/var/lib/postgresql/data/pgadmin" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
In conclusion: something is going wrong with the ownership of the data directory, and to be able to fix it you'd need a Unix command line on Windows that is able to run docker (something currently not possible with Bash on Ubuntu on Windows, which runs Ubuntu 16.04 binaries).
Maybe, in the future, you'll be able to run the needed commands (found here, under Arbitrary --user Notes), but these are *nix commands and PowerShell (started by Kitematic) can't run those. Bash on Ubuntu on Windows could run them, but that shell has no connection to the docker daemon/service on Windows...
TL;DR: Lost a day of work: It is currently impossible on Windows.
I have been trying to fix this issue too.
At first I thought it was a symlink problem (because the first error fails with "could not link ... Operation not permitted").
To be sure symlinks are permitted you have to:
share a folder in VirtualBox
run VirtualBox as administrator (if your account is in the administrator group): right-click virtualbox.exe and select "Run as Administrator"
if your account is not an administrator, add the symlink privilege with secpol.msc > Local Policies > User Rights Assignments: add your user to "Create symbolic links"
enable symlinks for your shared folder in VirtualBox (a concrete invocation is sketched after this list):
VBoxManage setextradata VM_NAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/SHARED_FOLDER_NAME 1
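For a Docker Toolbox setup the invocation might look like this (a sketch; "default" is the usual docker-machine VM name and "c/Users" the usual Toolbox share name, but both depend on your installation):
docker-machine stop default
VBoxManage setextradata default VBoxInternal2/SharedFoldersEnableSymlinksCreate/c/Users 1
docker-machine start default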
Alternatively you can also use the C:\Users\username folder, which is shared and symlink-enabled by the default Docker Toolbox installation.
Now I can create symlinks in the shared folder from the docker container... but I still have the same error "could not link ... operation not permitted".
So the reason must be somewhere else... in the file permissions, as you said, but I do not see why.