NFS Server on OSX - squash options - docker

Is it possible to have no_root_squash option enabled on an OSX Yosemite NFS server?
I'm running boot2docker on OS X Yosemite via VirtualBox (boot2docker 1.8, from Docker Toolbox).
Due to performance issues with large mounted host volumes, I thought about giving NFS a shot!
This is my /etc/exports:
/Users/myuser -mapall=myuser:staff 192.168.99.100
This is my /etc/nfs.conf:
nfs.server.mount.require_resv_port = 0
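After editing these two files, the NFS daemon has to reload them; on OS X this can be done and verified like so (standard nfsd/showmount commands, not part of the original post):
sudo nfsd restart
showmount -e localhost   # the /Users/myuser export should be listed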
And this script is executed in the boot2docker VM:
#!/bin/sh
sudo umount /Users
sudo mkdir -p /Users/myuser
sudo /usr/local/etc/init.d/nfs-client start
sudo mount 192.168.99.1:/Users/myuser /Users/myuser -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp,nfsvers=3
And inside a container I use a bash script that reads the UID and GID of a mounted volume and changes the UID and GID of a non-privileged user in order to allow read/write operations.
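A minimal sketch of what such an entrypoint script could look like (the mount path /data and the user name app are assumptions for illustration, not from the original setup):
#!/bin/sh
# Hypothetical entrypoint: align the container user "app" with the
# owner of the mounted volume so it can read/write the files.
TARGET_UID=$(stat -c '%u' /data)
TARGET_GID=$(stat -c '%g' /data)
groupmod -g "$TARGET_GID" app
usermod -u "$TARGET_UID" -g "$TARGET_GID" app
exec su app -c "$*"   # run the container command as the adjusted user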
So far so good. But sometimes I need to change file permissions, yet sudo chmod/chown don't work because all operations are mapped to my local (unprivileged) user on OSX. NFS on Linux supports the no_root_squash option, which would solve my problem perfectly, but I'm afraid it's not available on OSX Yosemite.
And it's not possible to use -mapall along with -maproot.
And no, I don't want to map against root on OS X too :)
Any ideas?

Related

Mounting volumes between host (macOS Big Sur) and podman VM

In my company we switched to Podman due to Docker's latest change in policies. My colleagues who use Windows with WSL2 switched seamlessly.
I, on macOS Big Sur v11.6.2, face the following issue:
$ podman machine init -v /Users:/mnt/Users
$ podman machine start
I get the following error
Starting machine "podman-machine-default"
INFO[0000] waiting for clients...
INFO[0000] new connection from  to /var/folders/4z/9v__6yld4d7fzmbxm8trl1sh0000gn/T/podman/qemu_podman-machine-default.sock
Waiting for VM ...
qemu-system-x86_64: -virtfs local,path=/Users/Dimitrii_Meritsidi/Documents/spbh_exus/git/cdp_airflow_local_environment,mount_tag=vol0,security_model=mapped-xattr: There is no option group 'virtfs'
qemu-system-x86_64: -virtfs local,path=/Users/Dimitrii_Meritsidi/Documents/spbh_exus/git/cdp_airflow_local_environment,mount_tag=vol0,security_model=mapped-xattr: virtfs support is disabled
I have read that macOS Big Sur doesn't support virtfs. What are the possible solutions here? I have found a probable workaround with VMware Fusion; however, it also requires a paid subscription.
The reason I need this mount is that we use a docker-compose.yml with volumes for launching Airflow locally.
Try:
podman machine init --volume /Users --volume /Volumes
To allow volume mounts on macOS, the podman machine needs to be created with access to the folder from which you are going to mount sub-folders.
It is likely that most macOS users only want to mount from within their home directory, so the machine should be created like this:
podman machine init --now --cpus=4 --memory=4096 -v $HOME:$HOME
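Once the machine is created with access to $HOME, bind mounts from underneath it should work as usual; for instance (the project path is a placeholder):
podman run --rm -v $HOME/myproject:/work alpine ls /work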
I wrote a guide for podman on macos at https://github.com/ansible/vscode-ansible/wiki/macos which you might find useful.

Warning when trying to run TensorFlow with Docker on Windows

I cannot start TensorFlow with the image downloaded from TensorFlow.
I used Docker on Windows 10, and the error output said this:
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
I tried searching Google for the problem but couldn't find anything; my experience with Docker is practically zero.
This is a warning that the container runs as root, so files it creates in the mounted directory will be owned by root on the host; you may then need sudo to access or change them, and you may not be able to modify them as a non-root user.
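For example, in a Linux or WSL2 shell you can start the container with your own UID/GID so that files written to a bind mount stay owned by you (the script name and paths here are placeholders):
docker run -it --rm -u $(id -u):$(id -g) -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./my_script.py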
A quick search shows that there are many blog references available; check these:
Docker creates files as root in mounted volume
Running a Docker container as a non-root user
Setup Docker for windows using windows subsystem linux
https://jtreminio.com/blog/running-docker-containers-as-current-host-user/
https://medium.com/better-programming/running-a-container-with-a-non-root-user-e35830d1f42a
https://docs.docker.com/install/linux/linux-postinstall/

Vagrant mount error after installing Docker

I'm getting a strange error in my vagrant VM. So I created a new ubuntu/trusty64 VM using VirtualBox (on OS X if anyone cares).
All fine there...
Then I installed Docker as per the instructions which basically involves running
wget -qO- https://get.docker.com/ | sh
That works fine too.
Then I go to reboot the VM: I exit the ssh shell, run vagrant reload, and I get this error message.
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3` vagrant /vagrant
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant` vagrant /vagrant
The error output from the last command was:
stdin: is not a tty
/sbin/mount.vboxsf: mounting failed with the error: No such device
Any thoughts?
I faced similar issues. It looks like Docker (and potentially other tools), when installed, will update the kernel in your ubuntu/trusty64 guest. Since the VBox Guest Additions that come preinstalled in ubuntu/trusty64 were built specifically against the original kernel version, the Guest Additions stop working on the next vagrant up or vagrant reload, which is when the new kernel installed by Docker kicks in. At that point, Vagrant is no longer able to auto-mount the /vagrant folder (or any synced folder, for that matter) because the Guest Additions were built against a different kernel.
To get them working again you'd have to rebuild the GuestAdditions against the new kernel version that Docker installed.
Luckily, there's a Vagrant plugin called vagrant-vbguest that automatically rebuilds the Guest Additions when it detects they need to be rebuilt (e.g. when the kernel in the guest changes, or when you upgrade VirtualBox on the host).
So in my case, the easy way to fix it was to:
On the host: $ vagrant plugin install vagrant-vbguest
On the guest: $ sudo apt-get install linux-headers-$(uname -r)
On the host: $ vagrant reload
Thanks to the vagrant-vbguest plugin, new VBox GuestAdditions will be automatically rebuilt against the new version of your kernel (for which you would have downloaded the required headers in the second step above).
Once the GuestAdditions are back in shape, synchronized folders should be working again and the mapping of /vagrant should be successful.
Give it a try.
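If you prefer to make this explicit, the plugin can also be configured in the Vagrantfile (a minimal sketch; the box name comes from the question, and auto_update is the plugin's documented option):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # Let vagrant-vbguest rebuild the Guest Additions whenever they no
  # longer match the running kernel or the host's VirtualBox version.
  config.vbguest.auto_update = true
end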

Docker External File Access Not in /Users/ on OSX

So, despite Docker 1.3 now allowing easy access to external storage on OSX through boot2docker for files in /Users/, I still need to access files not in /Users/. I have a settings file in /etc/settings/ that I'd like my container to have access to. Also, the CMD in my container writes logs to /var/log in the container, which I'd rather have it write to /var/log on the host. I've been playing around with VOLUME and passing stuff in with -v at run, but I'm not getting anywhere. Googling hasn't been much help. Can someone who has this working provide help?
Since boot2docker now includes the VirtualBox Guest Additions, you can share folders on the host computer (OSX) with guest operating systems (boot2docker-vm). /Users/ is mounted automatically, but you can also mount/share custom folders. In your host console (OSX):
$ vboxmanage sharedfolder add "boot2docker-vm" --name settings-share --hostpath /etc/settings --automount
Start boot2docker and ssh into it ($ boot2docker up / $ boot2docker ssh).
Choose where you want to mount the "settings-share" (/etc/settings) in the boot2docker VM:
$ sudo mkdir /settings-share-on-guest
$ sudo mount -t vboxsf settings-share /settings-share-on-guest
Assuming that /settings is the volume declared in the Docker container, add -v /settings-share-on-guest:/settings to the docker run command to mount the host directory settings-share-on-guest as a data volume.
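For example (the image name is a placeholder):
docker run -v /settings-share-on-guest:/settings my-image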
Works on Windows, not tested on OSX but should work.

Not enough entropy to support /dev/random in docker containers running in boot2docker

Running out of entropy in virtualized Linux systems seems to be a common problem (e.g. /dev/random Extremely Slow?, Getting linux to buffer /dev/random). Despite using a hardware random number generator (HRNG), the use of an entropy gathering daemon (EGD) like HAVEGED is often suggested. However, an entropy gathering daemon cannot be run inside a Docker container; it must be provided by the host.
Using an EGD works fine for Docker hosts based on Linux distributions like Ubuntu, RHEL, etc. Getting such a daemon to work inside boot2docker, which is based on Tiny Core Linux (TCL), seems to be another story. Although TCL has an extension mechanism, an extension for an entropy gathering daemon doesn't seem to be available.
So an EGD seems like a proper solution for running docker containers in a (production) hosting environment, but how to solve it for development/testing in boot2docker?
Since running an EGD in boot2docker seemed too difficult, I thought about simply using /dev/urandom instead of /dev/random. Using /dev/urandom is a little less secure, but still fine for most applications that are not generating long-term cryptographic keys. At least it should be fine for development/testing inside boot2docker.
I just realized that it is as simple as mounting /dev/urandom from the host as /dev/random into the container:
$ docker run -v /dev/urandom:/dev/random ...
The result is as expected:
$ docker run --rm -it -v /dev/urandom:/dev/random ubuntu dd if=/dev/random of=/dev/null bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00223239 s, 459 kB/s
At least I know how to build my own boot2docker images now ;-)
The most elegant solution I've found is running HAVEGED in a separate container:
docker pull harbur/haveged
docker run --privileged -d harbur/haveged
Check whether enough entropy is available:
$ cat /proc/sys/kernel/random/entropy_avail
2066
Another option is to install the rng-tools package and point it at /dev/urandom:
yum install rng-tools
rngd -r /dev/urandom
With this, I didn't need to map any volume into the Docker container.
Since I didn't want to modify my Docker containers for development/testing, I tried to modify the boot2docker image instead. Luckily, the boot2docker image is built with Docker and can easily be extended. So I've set up my own Docker build, boot2docker-urandom. It extends the standard boot2docker image with a udev rule found here.
Building your own boot2docker.iso image is as simple as:
$ docker run --rm mbonato/boot2docker-urandom > boot2docker.iso
To replace the standard boot2docker.iso that comes with boot2docker you need to:
$ boot2docker stop
$ boot2docker delete
$ mv boot2docker.iso ~/.boot2docker/
$ boot2docker init
$ boot2docker up
Limitations: from inside a Docker container, /dev/random still blocks. Most likely this is because Docker containers do not use the host's /dev/random directly, but the corresponding kernel device, which still blocks.
Alpine Linux may be a better choice for a lightweight Docker host. Alpine LXC & Docker images are only 5 MB (versus 27 MB for boot2docker).
I use haveged on Alpine for LXC guests & on Debian for docker guests. It gives enough entropy to generate gpg / ssh keys & openssl certificates in containers. Alpine now has an official docker repo.
Alternatively build a haveged package for Tiny Core - there is a package build system available.
If you have this problem in a Docker container created from a self-built image that runs a Java app (e.g. created FROM tomcat:alpine) and you don't have access to the host (e.g. on a managed k8s cluster), you can add the following command to your Dockerfile to use non-blocking seeding of SecureRandom:
RUN sed -i.bak \
-e "s/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g" \
-e "s/securerandom.strongAlgorithms=NativePRNGBlocking/securerandom.strongAlgorithms=NativePRNG/g" \
$JAVA_HOME/lib/security/java.security
The two regular expressions replace file:/dev/random with file:/dev/urandom and NativePRNGBlocking with NativePRNG in the file $JAVA_HOME/lib/security/java.security, which lets Tomcat start up reasonably fast on a VM. I haven't checked whether this also works on non-Alpine-based OpenJDK images, but if the sed command fails, just check the location of java.security inside the container and adapt the path accordingly.
Note: in JDK 11 the path has changed to $JAVA_HOME/conf/security/java.security.
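A hedged variant of the same RUN step for JDK 11+ images, differing only in the path (an untested assumption based on the note above):
RUN sed -i.bak \
-e "s/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g" \
-e "s/securerandom.strongAlgorithms=NativePRNGBlocking/securerandom.strongAlgorithms=NativePRNG/g" \
$JAVA_HOME/conf/security/java.security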
