File permissions for mapped folders on host machine - docker

I'm running the official WordPress container for development and have it mapped to a local folder on my host machine (OS X). Everything runs fine, except that WordPress can't write any files because, as far as I can tell, it has no permissions.
I've even tried setting 777 permissions on the uploads folder; it then creates the first folder, but when it goes to add another one it fails again.
Is there any way I can get around this?
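For reference, the setup being described is roughly the sketch below, and one common workaround is to hand the uploads folder to the web server user inside the container. The host path and container name are only examples, not taken from the question:

# bind-mount a local folder into the official WordPress image (host path is an example)
docker run -d --name wp \
  -p 8080:80 \
  -v ~/Sites/wordpress:/var/www/html \
  wordpress

# give Apache's user (www-data in the official image) ownership of the uploads folder
docker exec wp chown -R www-data:www-data /var/www/html/wp-content/uploads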

Related

VSCode Docker container not connecting files

I'm using a container for a TensorFlow-GPU environment to avoid the hassle of setting one up manually, and I was following this guide: https://code.visualstudio.com/docs/remote/containers
I've set up the container and installed the necessary extensions, and then I run the "Open Folder in a Container" command. It works fine, but none of my files get linked to the new working area inside the container.
My understanding was that the guide was saying I should get access to all my existing files and folders for the project inside the container.
Is this not how it works? What is the normal way of linking a project from the host system into the container?
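For context, when "Open Folder in a Container" works as intended it bind-mounts the local project folder into the container. A rough, hand-rolled equivalent is sketched below; the target path /workspaces/my-project and the image name are assumptions, not taken from the question:

# roughly what the Remote - Containers extension does with a local project folder (sketch)
docker run -it --gpus all \
  -v "$PWD":/workspaces/my-project \
  -w /workspaces/my-project \
  tensorflow/tensorflow:latest-gpu bash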
EDIT: This is what I get when I open my container with none of my files present

Process can write to docker volume on Windows not on Ubuntu

I have an image based on opencpu/base. It starts an Apache-based server and then invokes R scripts every time somebody calls an API endpoint.
One of those scripts tries to write a file to a location in the container. When I mount a folder into that location, it works on my Windows machine, but not on Ubuntu.
I've tried using named volumes on Ubuntu, but that does not work either. When I run bash interactively inside the container on Ubuntu, I can write and read the mounted volume just fine, but the Apache process cannot.
Does anybody have some hints what could be going on here?
When you log in interactively to the container, you will have root permissions.
Apache usually runs as another user (www-data), and that user must have read permissions on the folder that you want it to read.
Make sure that the permissions of the folder match the user that will read it.
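As a hedged sketch (the container name and the mounted path /ocpu/output are only examples), you could check which user the Apache worker runs as and give that user access to the mounted folder:

# check which user Apache runs as (Debian/Ubuntu keeps this in the envvars file)
docker exec mycontainer grep RUN_USER /etc/apache2/envvars

# give that user (typically www-data) ownership of the mounted location (example path)
docker exec mycontainer chown -R www-data:www-data /ocpu/output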

Is it possible to use a single docker volume to map to two different directories?

I am using VS Code, Remote Development Containers, and Docker to create development environments within containers. Everything works fine, but I noticed that when working with different projects, doing things such as yarn install means having to download the npm modules each time. Of course, once a container does this, they are stored in its cache, specifically /usr/local/share/.cache/yarn/v6.
When I attempted to mount that folder to the host machine, yarn install would frequently fail, stating that it was having trouble downloading a package due to a bad network connection (the connection was just fine). So I created a volume instead, and everything worked just fine.
The problem I am running into is that I also want to share other folders in the volume so that multiple containers use the same cache for things such as NuGet packages. I was hoping to somehow have my volume look like this:
mysharedvolume/yarn => /usr/local/share/.cache/yarn/v6
mysharedvolume/nuget => /wherever/nuget/packages/are/cached
mysharedvolume/somefile.config => /wherever/somefile.config
This does not seem to be the way volumes work in Docker: all of the files are mixed up at the root of the volume (there are no subdirectories). Of course, I can't simply map the entire /usr folder or anything like that; that would be crazy.
Before I go off and create separate volumes for each cache and config file, is there a way to do this with a single shared volume?
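A named volume mounts as a single directory, so it can't fan out into several container paths by itself. The straightforward alternative is one named volume per cache, shared by whichever containers need it. A minimal docker-compose sketch, where the NuGet cache path and the service/image names are assumptions rather than details from the question:

# docker-compose.yml (sketch): one named volume per cache, shared between services
services:
  web:
    image: node:lts
    volumes:
      - yarn-cache:/usr/local/share/.cache/yarn/v6
      - nuget-cache:/root/.nuget/packages
  api:
    image: mcr.microsoft.com/dotnet/sdk:8.0
    volumes:
      - nuget-cache:/root/.nuget/packages
volumes:
  yarn-cache:
  nuget-cache: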

Docker-compose volumes not mounted correctly in VirtualBox under Windows

I am trying to run Hyperledger's BYFN Tutorial on a Win10 Home using Docker Toolbox, with VirtualBox 5.2.4. I am using the default image for the VirtualBox VM.
I have set up a shared folder (not in C:/Users, but on my other drive), and it seems to be functioning correctly: changes I make from either Windows or the docker-machine are reflected in both places, as intended. I successfully generate the network artifacts using "./byfn -m generate", but I get an error when I try to bring the network up with "./byfn up".
What happens is that, as far as I can see from the logs, all the containers are brought up correctly, but for some reason the volumes of the cli container are not attached correctly (I think). When byfn.sh finishes, I get the following error:
When I ssh into the cli container, I can see the channel-artifacts, crypto, and scripts folders, but their contents don't seem to correlate with the volumes: section of the docker-compose file. First, the scripts folder is empty (whereas the docker-compose file specifies that a bunch of files should be mounted there), so I get the above error. Second, channel-artifacts contains only one directory, named genesis.block, which should actually be a file. And the crypto folder contains just a bunch of directories.
As you might have guessed, I'm pretty new at docker, so this might be intended behavior, but I'm still getting an error.
Please let me know if I can provide additional information. Thanks in advance.
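With Docker Toolbox, the bind mounts in docker-compose are resolved inside the boot2docker VM, not on Windows itself, so a useful first check (a sketch only; the machine name, share name, and paths are assumptions) is whether the project folder is actually visible inside the VM at the path docker-compose uses:

# open a shell inside the docker-machine VM
docker-machine ssh default

# check whether the project folder is visible at the path docker-compose resolves to
ls /d/fabric-samples/first-network

# if the VirtualBox shared folder is not auto-mounted, mount it manually
# (requires the VirtualBox guest additions in the boot2docker image; share name "d" is an assumption)
sudo mkdir -p /d
sudo mount -t vboxsf -o uid=1000,gid=50 d /d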

How to persist data in a Docker .NET Core Web app?

I have trouble understanding how to work with data in a Web API app running with Docker.
In my app, a user can upload files which are stored like this:
~\App_Data\accounts\user123\files\<sha256>.bin
Without configuring any volumes, a Docker container with my app image seems to work fine and writes files without any problems.
Now I'd like to configure it so that the files end up somewhere I can specify explicitly, and not inside the default Docker volumes folder.
I assume I need to create a volume mapping?
I have tried creating a folder and mapping it to "/App_Data". It still works as before, but I still don't see any files in this folder.
Is it a problem with write access on this folder? If it doesn't have access, will Docker fall back and write to a default volume?
What user/group should have write access to this folder? Is there a "docker" user?
I'm running this on a Synology NAS so I'm only using the standard Docker UI with the "Add Folder" button.
Here are the folders I have tried:
Got it working now!
The problem was this line:
var appDataPath = env.ContentRootPath + @"\App_Data";
which translated to "/app\App_Data" when running in Docker.
First of all, I was using a Windows directory separator '\', which I don't think works on Linux. Also, I don't think the path can include "/app", since it is relative to this folder. When running this outside of Docker on Windows I got a rooted path, which worked better: "c:\wwwroot\app\App_Data"
Anyway, by changing to this it started working as expected:
var appDataPath = @"/App_Data";
Update
Had a follow-up problem. I wanted the paths to work both in Docker on Linux and with normal Windows hosting, but I couldn't just use /App_Data as the path because that would translate to c:\App_Data on Windows. So I tried using this path instead: ./AppData, which worked fine on Windows, resulting in c:\wwwroot\app\App_Data. But this would still not work in Docker, unfortunately. I don't get why, though. Maybe Docker is really picky with the path matching and only accepts an exact match, i.e. /App_Data, because that's the path I have mapped in the container config.
Anyway, this was a real headache; I have spent 6 hours straight on it now. This is what I came up with that works on both Linux and Windows. It doesn't look terribly nice, but it works:
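// root the path at '/' when the current directory is a Linux-style rooted path (Docker), otherwise keep it relative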
Path.Combine(Directory.GetCurrentDirectory().StartsWith("/") ? "/" : ".", "App_Data");
If you can come up with a better looking method, please feel free to let me know.
Update 2
OK, I think I get it now. When running this in Docker, every path has to be rooted with '/'; relative paths are not allowed. My app files are copied to the container path '/app', and I have mapped my data to '/data'. The current directory is set to '/app', but to access the data I obviously have to point to '/data' and not '/app/data'. I was mistakenly believing that all paths were relative to '/app' and not '/'. The likely reason is that I keep my data files inside the app folder when running under standard Windows hosting (which is probably not a very good idea in any case). That, however, confused me into thinking the same applied to my Docker environment.
Now that I've realized this, it is a lot clearer. I have to use '/data' and not './data', '/app/data', or even 'data' (which is also relative), etc.
In standard Windows hosting, where relative paths are OK, I can still use './data' or any other relative path, which will be resolved relative to ContentRootPath/the current directory. However, an absolute rooted path like '/data' will not work, because it will resolve to 'c:\data' (relative to the root of the current drive).
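To make the distinction concrete, here is a hedged sketch of the mapping being described (the host folder and image name are examples, not from the post): the right-hand side of -v is the container path the application must use, and it is rooted at '/'.

# host folder (left) mapped to the container path '/data' that the app writes to (right)
docker run -d -v /volume1/docker/myapp/data:/data mywebapi:latest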
I suggest you map the volume; in the application it is done as follows. Check that it is not mounted in read-only mode, because otherwise you will not see the files.
Volume: Since Transmission is a downloader, we need a way to access the downloaded files. Without mapping a physical shared folder on the Synology NAS, all downloaded files will be stored in the container and will be difficult to retrieve. On Transmission's Dockerfile page, we saw two volumes in Transmission: /config and /downloads. We will now map these two volumes to physical shared folders on the Synology NAS:
Un-check the Read-Only option as we need to grant Transmission permission to write data into the physical drives.
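As a concrete sketch of that guide's mapping (the host folders and the image name are assumptions), the equivalent docker run would be along these lines, with both mounts left writable rather than read-only:

# map Transmission's /config and /downloads volumes to shared folders on the NAS
docker run -d --name transmission \
  -v /volume1/docker/transmission/config:/config \
  -v /volume1/downloads:/downloads \
  linuxserver/transmission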
