Give a running Docker container access to a Samba share

I need to access a Samba share from a Docker container. The container starts when the PC starts, but the Samba share only becomes available a few minutes later.
Once available, the Samba share is mounted at /media/username/nas/SambaShare, and I have this in my docker-compose.yml:
volumes:
  - /media/username/nas:/media/username/nas
But to access the Samba share from the container, I have to restart the container. Is there a way to avoid that?
EDIT: similarly, if I mount the Samba share, restart the container, and then unmount the Samba share, the container still has access to it. Why?
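For context, the behaviour described above matches Docker's default bind propagation (rprivate): mounts the host adds or removes under the bound path after the container starts are not propagated into it. Below is a minimal docker run sketch with shared propagation, using the paths from the question and a placeholder image name; docker-compose's long volume syntax exposes the same bind propagation setting.

# Docker bind mounts default to rprivate propagation, so a share mounted on the host
# after the container starts never appears inside it (and host unmounts are not seen
# either). With rshared propagation the container follows the host's mount table.
# Note: the source path must live on a shared mount; on most systemd hosts / already
# is, otherwise run: sudo mount --make-rshared /
docker run -d \
  --mount type=bind,source=/media/username/nas,target=/media/username/nas,bind-propagation=rshared \
  my-image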

Related

Access SMB share through docker image or through docker host's connection to SMB share?

I have a service or three that needs access to the same SMB share. The service(s) is running inside a Docker container. I think my choices are:
Have the Docker container(s) where the service(s) is running mount the SMB share itself
Have the host of the Docker container(s) mount the SMB share and then share it with the Docker container(s) where the service(s) is running
Which is better from a best practices perspective (which should probably include security as a dimension)?
Am I missing an option?
Thanks!
In standard Docker, you should almost always mount the filesystem on the host and then use a bind mount to attach the mounted filesystem to a container.
Containers can't usually call mount(2), a typical image won't contain smbclient(1) or mount.cifs(8), and safely passing credentials into a container is tricky. It will be much easier to do the mount on the host, and you can use standard images against the mounted filesystem without having to customize them to add site-specific tools.
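A minimal sketch of that pattern (share name, mount point, and credentials file are placeholders): mount the share on the host once, then bind-mount the already-mounted directory into a stock image.

# On the host: mount the SMB share (mount.cifs comes from the cifs-utils package).
sudo mkdir -p /mnt/share
sudo mount -t cifs //fileserver/share /mnt/share \
  -o credentials=/root/.smbcredentials,uid=1000,gid=1000,vers=3.0

# Then bind-mount the already-mounted directory into a container; the image needs
# no SMB tooling and never sees the credentials.
docker run -d --name app -v /mnt/share:/data:ro nginx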
One way is to mount the SMB shares on the host system as normal, for example on Linux using mount and fstab. Afterwards you can use Docker volumes to expose the SMB shares mounted on your host system to your containers.
The advantages of using Docker volumes are explained in the Docker documentation:
https://docs.docker.com/storage/volumes/
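A sketch of the mount-and-fstab route this answer mentions, plus the named-volume variant where Docker's local volume driver performs the CIFS mount itself; server, share, user, and credential values below are placeholders.

# /etc/fstab entry so the share is mounted automatically at boot:
//fileserver/share  /mnt/share  cifs  credentials=/root/.smbcredentials,_netdev,uid=1000,gid=1000  0  0

# Alternatively, let Docker's local volume driver mount the share as a named volume:
docker volume create --driver local \
  --opt type=cifs \
  --opt device=//fileserver/share \
  --opt o=username=alice,password=secret,vers=3.0 \
  smb_share
docker run -d -v smb_share:/data nginx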

Proper way to access fileshare with docker ubuntu server

I’m learning both Linux (Ubuntu) and Docker.
I’ve gotten far enough that my services are running as they should in my Docker container.
The service needs to access a file share for data access.
I also want the Docker mounts to reside on a file share so nothing “important” lies on the host itself.
How should I set this up properly?
I would think that the Docker mounts should be done on the host?
The file access from Docker, however: should I set this up via the Dockerfile, or is it just as fast to set it up on the host and mount it through Docker?
There is no security concern with mounting on the host in this case.
My next question will probably be how do I mount the share correctly?
Regards, Lars

Accessibility in Docker volumes

I'm reading a document from Microsoft that states the following about Docker volumes:
Volumes are stored within directories on the host filesystem. Docker will mount and manage the volumes in the container. Once mounted, these volumes are isolated from the host machine. Multiple containers can simultaneously use the same volumes. Volumes also don't get removed automatically when a container stops using the volume.
In our example, we can create a directory on our container host and mount this volume into the container when we create the tracking portal container. When our tracking portal logs data, we can access this information via the container host's filesystem. We'll have access to this log file even if our container is removed.
I'm confused: I understand that the volumes are isolated from the host machine, but how can that be if we can access the data via the host?
I'm less familiar with Docker on Windows, but I'm sure it's probably the same as Linux in this regard...
Docker volumes are "isolated on the host machine" by being kept in a particular location with particular permissions on the host's filesystem (by default under /var/lib/docker/volumes). Users/accounts with elevated permissions can still access those directories/files.
By contrast, a bind mount can be made to (pretty much) any directory on the host's filesystem.
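A small illustration of the difference, assuming a volume named myvol and a Linux host (the inspect output path is the usual default):

# Named volume: Docker chooses and owns the storage location on the host.
docker volume create myvol
docker volume inspect -f '{{ .Mountpoint }}' myvol
# typically prints /var/lib/docker/volumes/myvol/_data, readable only with elevated permissions

# Bind mount: you pick an ordinary host directory, so anything on the host that can
# read that path can also read what the container writes there.
docker run --rm -v /tmp/portal-logs:/logs alpine sh -c 'echo test > /logs/app.log'
cat /tmp/portal-logs/app.log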

Mount rexray/ceph volume in multiple containers on Docker swarm

What I have done
I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow the container to move to another host in the event of failure I need resilient shared storage across the swarm. After looking into the various options I have implemented the following:
Installed a Ceph Storage Cluster across all nodes of the Swarm and created a RADOS Block Device (RBD).
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Installed Rexray on each node and configured it to use the RBD created above. https://rexray.readthedocs.io/en/latest/user-guide/storage-providers/ceph/
Deployed a Docker stack that mounts a volume using the rexray driver, e.g.
version: '3'
services:
  test-volume:
    image: ubuntu
    volumes:
      - test-volume:/test
volumes:
  test-volume:
    driver: rexray
This solution is working, in that I can deploy a stack, simulate a failure on the node that is running it, and then observe the stack restarted on another node with no loss of persistent data.
However, I cannot mount a rexray volume in more than one container. My reason for doing so is to use a short-lived "backup container" that simply tars the volume to a snapshot backup while the main container is still running.
My Question
Can I mount my rexray volumes into a second container?
The second container only needs read access so it can tar the volume to a snapshot backup while keeping the first container running.
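For illustration, this is roughly what such a short-lived backup container looks like with an ordinary local volume; as the answer below explains, this second mount is exactly what fails with the rexray/Ceph setup (volume name and backup path are placeholders).

# One-off backup container: mount the data volume read-only and tar it to a
# bind-mounted host directory while the main service keeps running.
docker run --rm \
  -v test-volume:/test:ro \
  -v /srv/backups:/backup \
  ubuntu tar czf /backup/test-volume.tar.gz -C /test .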
Unfortunately the answer is no, in this use case rexray volumes cannot be mounted into a second container. Some information below will hopefully assist anyone heading down a similar path:
Rexray does not support multiple mounts:
Today REX-Ray was designed to actually ensure safety among many hosts that could potentially have access to the same host. This means that it forcefully restricts a single volume to only be available to one host at a time. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
But Rexray does support a feature called pre-emption where:
..if a second host does request the volume that he is able to forcefully detach it from the original host first, and then bring it to himself. This would simulate a power-off operation of a host attached to a volume where all bits in memory on original host that have not been flushed down is lost. This would support the Swarm use case with a host that fails, and a container trying to be re-scheduled.
(https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
However, pre-emption is not supported by the Ceph RBD.
(https://rexray.readthedocs.io/en/stable/user-guide/servers/libstorage/#preemption)
You could of course have a container that attaches the volume and then exports it via NFS on a dedicated swarm network; the client containers could then access it via NFS.

Jupyterhub deployed via docker: Connect with a samba drive

So I have set up a JupyterHub deployment with Docker, as described here.
The users would like to connect to a Samba share from within their notebooks. In order for this to work, I wanted to write a small bash script. The script asks the user for their credentials to connect to the Samba share drive. So here are my questions:
I have to open ports 445 and 139 of the notebook server container and direct them through the JupyterHub container to the system's ports 445 and 139. Where and how can I achieve this in the Docker-deployed JupyterHub framework that I have?
I have to grant the users SYS_ADMIN and DAC_READ_SEARCH capabilities. Suppose I don't trust the users. Do you think this is a good idea? What is the worst case scenario... please scare me. :D
Is there a safer way, like some service running in an extra container that handles the Samba share request, creates a Docker volume, and mounts it into that same user's container at runtime?
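For reference, mounting CIFS from inside a container is what drags in SYS_ADMIN and DAC_READ_SEARCH in the first place; here is a rough sketch with placeholder server, share, and user names (some hosts additionally need AppArmor/seccomp relaxed, so treat this as an outline rather than a recipe).

# Inside-the-container mount: requires extra capabilities, which is the security trade-off
# discussed in question 2 above.
docker run --rm -it \
  --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH \
  ubuntu bash -c '
    apt-get update && apt-get install -y cifs-utils &&
    mkdir -p /mnt/share &&
    mount -t cifs //fileserver/share /mnt/share -o username=alice &&
    ls /mnt/share'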
