IoT Edge Windows container volume access - Docker

I have a Windows container module that is supposed to write to a simple text file inside the volumes folder on the host machine.
The module is hardcoded to write the same thing to the same file on startup (this is for testing purposes).
Expected Behavior
The module is initialized and a volume is created on the host machine and a text file is created in that volume.
Actual Behavior
The module is not allowed to write to its volume and I get the below access permission issue.
Volume Access Permission Issue
If I add "Users" to the volume folder and give that group permission to modify the volume, then everything works.
Question
Is there a way to do this without changing volume access options manually every time? If not, what is the best practice for allowing a Windows container to access its volume?
Device Info
Windows 10 Enterprise LTSC
iotedge 1.1.3

Do you have the same behavior in the default path for the Moby engine volumes?
Path: C:\ProgramData\iotedge-moby\volumes
Command to create/set:
docker -H npipe:////./pipe/iotedge_moby_engine volume create testmodule
With volumes in this path I have never had a problem (currently we use Edge Runtime 1.1.4 + Windows Server 2019).
If we use a directory outside this "default" volume path, we need to manually grant "Authenticated Users" the Modify, Read, Write, List and Execute permissions so the container/Moby engine can read/write there.
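For reference, a hedged example of granting that access with icacls (the folder path is only a placeholder for wherever your volume actually lives):
icacls "C:\data\volumes" /grant "Authenticated Users:(OI)(CI)M" /T
(OI)(CI) makes the grant inherit to sub-folders and files, and M is the Modify permission set, which covers Read, Write, List and Execute.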

Related

Is it possible to change my Docker data-root directory to another server?

I installed Docker inside a server, or what I usually call a VM (virtual machine, with an RHEL environment), and I have been thinking of using a directory on a remote Windows network share as my new data-root directory (for space reasons).
In my case, the data-root directory on my VM is located at /home/docker_new, and the path I want to use as the new data-root directory on the remote Windows network is, for example,
\\xx.xx.x.x\a\b\c.
I did try to do my research and found that most solutions focus on changing the data-root directory within the same VM, while some focus on how to 'move' Docker to another server. What I intend to do is just change the data-root directory to a directory on a remote Windows network.
So, my question is:
1. Is it possible to do so?
2. If it's possible, what are the steps that I should follow?
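For what it's worth, the data-root itself is normally changed through the daemon configuration. A minimal sketch on the RHEL VM, assuming the share is mounted via CIFS first and that /mnt/docker_new and the credentials are placeholders (and that no other settings already live in daemon.json):
sudo mount -t cifs //xx.xx.x.x/a/b/c /mnt/docker_new -o username=<user>,password=<pass>
echo '{ "data-root": "/mnt/docker_new" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
Whether the storage driver (e.g. overlay2) actually works on top of a CIFS mount is a separate question, so treat this only as the mechanism for pointing Docker at a different data-root.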

Docker container bind host directory

I want to be able to update the application's settings by changing a local profile file.
I use "volume" to bind a local directory, for example:
docker run -v D:\test:/app
But when the container is running, all files in /app are emptied, because D:\test does not have any files.
Is there any way I can achieve my goal?
Your question is a bit unclear. I guess your problem is the following: you want to bind mount your app directory, but it is initially empty and will stay empty, since the bind mount overwrites everything put into /app during build.
I usually use two different ways:
Put your profile into the host directory D:\test (if applicable). This is also a viable strategy for e.g. the source code of Node.js apps.
During build, put your profile into /app_temp. Then create an entry point which moves /app_temp into /app. If you want to persist the profile through multiple build/run phases, it has to be inside the build context (which is likely not D:\test) on your host.
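A rough sketch of that second approach, assuming names such as /app_temp and entrypoint.sh (neither of which comes from the question):
#!/bin/sh
# entrypoint.sh: copy the defaults baked into the image at /app_temp into the bind-mounted /app
cp -rn /app_temp/. /app/    # -n (GNU cp) avoids overwriting files already present in D:\test
exec "$@"                   # then run whatever command the container was started with
The image's Dockerfile would COPY the profile into /app_temp and set this script as the ENTRYPOINT.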
You need to change the way your application is organized a bit. Put all the settings in their own directory and have the application read them from there. Then you can map only the settings folder to the host one.
Another option is to map the host folder to a temporary folder inside the container and have the ENTRYPOINT script update your files (by copying them over) and then run your application.
Docker was not meant to be used for the workflow you are trying to setup and for this reason you need to do some extra work.
because D:\test does not have any files.
That's the way it works. The volume type you use is a bind mount, i.e. you mount a file system using a mount point mapped to a host directory.
According to documentation:
With bind mounts, we control the exact mountpoint on the host. We can use this to persist data, but it’s often used to provide additional data into containers.
You have two options here (both imply the host data should exist in advance):
Bind to a folder containing configuration data, as you showed.
It is also possible to bind only a file:
docker run -v D:\test\config.json:/app/config.json
When binding to a file that does not exist beforehand, the Docker daemon will think it is a directory and will create one, both in the container and on the host.
you mount file system, using mount point mapped to a host directory
Hence, if the host directory is empty, the mounted file system will also be empty.

Process can write to docker volume on Windows not on Ubuntu

I have an image based on opencpu/base. It starts an Apache-based server and then invokes R scripts every time somebody calls an API endpoint.
One of those scripts tries to write a file to a location in the container. When I mount a folder into that location, it works on my Windows machine, but not on Ubuntu.
I've tried using named volumes on Ubuntu, but it does not work either. When I run bash inside the container interactively on Ubuntu, I can write and read the mounted volume just fine. But the apache process cannot.
Does anybody have some hints what could be going on here?
When you log in interactively to the container, you will have root permissions.
Apache usually runs as another user (www-data), and that user must have read permissions on the folder that you want it to read.
Make sure that the permissions of the folder match the user that will read it.
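A hedged way to check and fix that from the Ubuntu host, assuming the Apache user inside the container is www-data with UID/GID 33 (typical for Debian/Ubuntu-based images) and that /srv/ocpu-data is a placeholder for your mounted folder:
docker exec <container> id www-data    # confirm the UID/GID Apache actually runs as inside the container
sudo chown -R 33:33 /srv/ocpu-data     # give that user ownership of the host folder backing the mount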

rbind usage on local volume mounting

I have a directory that is configured to be managed by an automounter (as described here). I need to use this directory (and all directories that are mounted inside) in multiple pods as a Local Persistent Volume.
I am able to trigger the automounter within the containers, but there are some use cases where this directory is not empty when the container starts up. This makes sub-directories appear empty and unable to trigger the automounter (within the container).
I did some investigation and discovered that when using Local PVs, there is a mount -o bind command between the source directory and some internal directory managed by the kubelet (this is the line in the source code).
What I actually need is for rbind to be used (recursive binding; here is a good explanation).
Using rbind also requires some changes to the part that unmounts the volume (recursive unmounting is needed).
I don't want to patch the kubelet and recompile it... yet.
So my question is: are there some official methods to provide to Kubernetes some custom mounter/unmounter?
Meanwhile, I did find a solution for this use-case.
Based on the Kubernetes docs, there is something called Out-of-Tree Volume Plugins.
The Out-of-tree volume plugins include the Container Storage Interface (CSI) and FlexVolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository
Even though CSI is the encouraged option, I chose FlexVolume to implement my custom driver. Here is the detailed documentation.
This driver is actually a Python script that supports three actions: init/mount/unmount (--rbind is used to mount the directory managed by the automounter, and the unmount is done recursively). It is deployed using a DaemonSet (docs here).
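For illustration, a minimal FlexVolume driver sketch in shell rather than the author's Python (the /autofs/source path and the JSON messages are assumptions; kubelet invokes the driver as "<driver> init", "<driver> mount <mount dir> <json options>" and "<driver> unmount <mount dir>"):
#!/bin/sh
op=$1
case "$op" in
  init)
    echo '{"status": "Success", "capabilities": {"attach": false}}'
    ;;
  mount)
    mntpath=$2
    mkdir -p "$mntpath"
    # recursive bind so the automounter's sub-mounts are visible inside the volume
    mount --rbind /autofs/source "$mntpath" \
      && echo '{"status": "Success"}' \
      || echo '{"status": "Failure", "message": "rbind failed"}'
    ;;
  unmount)
    mntpath=$2
    # recursive unmount to undo the rbind
    umount -R "$mntpath" \
      && echo '{"status": "Success"}' \
      || echo '{"status": "Failure", "message": "unmount failed"}'
    ;;
  *)
    echo '{"status": "Not supported"}'
    ;;
esac
exit 0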
And this is it!

Dockerized executable read/write on host filesystem

I just dockerized an executable that reads from a file and creates a new file in the very directory that file came from.
I want to use Docker in that setup, so that I avoid installing numerous third-party libraries in the production environment.
My problem now: I have file /this/is/a.file on my underlying (host) file system and my executable is supposed to create /this/is/b.file.
As far as I can see, the only way to get this done is by mapping a volume that points to /this/is and then letting the executable know where I mounted it inside the Docker container.
Am I right? Or is there a way that I just pass docker run mydockerizedstuff /this/is/a.file without using Docker volumes?
You're correct, you need to pass in /this/is as a volume and the executable will write to that location.
If you want to constrain the thing even more, you can pass /this/is/b.file as a volume. You need to create it (simply via touch) beforehand, otherwise Docker will consider it a directory and create it as such for you, but you'll know that the thing won't be able to create /this/is/c.file or any other thing.
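A minimal sketch of the directory variant (the /data mount point inside the container is an assumption, not something from the question):
docker run --rm -v /this/is:/data mydockerizedstuff /data/a.file
# the executable writes /data/b.file, which then appears on the host as /this/is/b.file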
