Permissions issue with shared host drive mounted to two docker containers - docker

I have two containers, Weblogic and Tomcat. Weblogic runs under the oracle user; Tomcat runs under the root user.
I use the same volume mapping for both services, so that the application deployed in Tomcat orchestrates a business process in which the application deployed in Weblogic saves files to the shared folder.
I ran into a permissions issue: because Tomcat runs under root, it creates the directory structure with root as owner and group, and Weblogic, running under oracle, can't save files there.
What is the best way to handle a shared host data folder between two containers and avoid permission problems?

The unix/linux solutions to this are to use one of:
1. The same UID and open permissions on the user
2. The same GID and open permissions on the group
3. None of the above and open permissions for everyone
These options all apply identically for apps running inside of containers.
The third option is the least ideal, since it allows anyone on the host to modify these files. However, implementing it is quick: a chmod -R 777 dir, plus setting the umask to 000 for any apps that create files in that directory, as sketched below.
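A minimal sketch of option 3 (dir is a placeholder):

chmod -R 777 dir   # everyone may read, write, and traverse everything under dir
umask 000          # set for each app's process so new files are created 666 (dirs 777)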
That leaves option 1 or 2. Option 1 means either dropping root for Tomcat or running Weblogic as root; the former is preferred but may not be possible depending on the app.
If option 1 isn't possible, try using a common group between the two apps. Add the users to the same GID in both images, and on your directory, change the group to that common GID and set the setgid bit so that every file created in that directory is also created with that group:
chgrp $gid dir   # change the directory's group to the shared GID
chmod g+s dir    # setgid bit: files created inside inherit the directory's group
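Putting option 2 together end to end might look like this (the GID 2000, group name appshare, and path /data/shared are illustrative, not values from the question):

# on the host
sudo groupadd -g 2000 appshare       # optional: a named group, purely for readability
sudo chgrp 2000 /data/shared
sudo chmod 2775 /data/shared         # rwxrwxr-x plus the setgid bit (the leading 2)

# in each image (Dockerfile RUN steps), put the app user in the same GID:
#   RUN groupadd -g 2000 appshare && usermod -aG appshare oracle

Files created by either container under /data/shared then carry the appshare group, and both users can write.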

Related

Process can write to docker volume on Windows not on Ubuntu

I have an image based on opencpu/base. It starts an apache-based server, and then invokes R scripts every time somebody calls an API endpoint.
One of those scripts tries to write a file to a location in the container. When I mount a folder into that location, it works on my Windows machine, but not on Ubuntu.
I've tried using named volumes on Ubuntu, but it does not work either. When I run bash inside the container interactively on Ubuntu, I can write and read the mounted volume just fine. But the apache process cannot.
Does anybody have any hints about what could be going on here?
When you log in to the container interactively, you have root permissions, so everything is readable and writable.
Apache usually runs as another user (www-data), and that user must have read (and, in your case, write) permissions on the folder you want it to use.
Make sure the permissions of the folder match the user that will actually access it.
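A rough way to check and fix this from the host (the container name, mount path, and host folder here are hypothetical; www-data is UID/GID 33 in Debian/Ubuntu-based images):

docker exec -u www-data mycontainer touch /mounted/location/test   # reproduce the failure as apache's user
sudo chown -R 33:33 ./host-folder                                  # hand the host folder to www-data

Docker Desktop on Windows presents shared folders with permissive ownership inside the container, which is typically why the same mount works there but fails with Ubuntu's plain bind mounts.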

Which file permissions should be set for NGINX hosted files?

When running an NGINX server that hosts static content, which file permissions should be set? The main consideration is to have the safest configuration.
I currently have two dockerized NGINX servers behind a reverse-proxy, one of them containing files with 1000:1000 (copied directly from the host machine), the other with root:root (copied from a multi-stage build). The current configuration works, but I would like to know the best practice.
Folders need read and execute permission (execute is what allows traversing into a directory). Static files just need read permission. I assume your server is not running scripts. Ownership should not be root:root; use a user/group that the nginx worker processes can act as.
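A sketch of one such layout, assuming the content lives in /usr/share/nginx/html and the worker processes run as the nginx user:

chown -R root:nginx /usr/share/nginx/html                  # readable via the nginx group
find /usr/share/nginx/html -type d -exec chmod 750 {} +    # directories: group can read and traverse
find /usr/share/nginx/html -type f -exec chmod 640 {} +    # files: group can read only

Keeping the files non-writable by the nginx user means a compromised worker cannot alter the content it serves.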

WAS Liberty Docker Image Deployment Issue in Openshift

I am able to deploy the Liberty docker image in a local Docker container and can access the Liberty server.
I pushed the Liberty image to Minishift installed on my system, but when I try to create the docker container, I get the following error.
Has anyone tried this before? Please share your views.
Log Trace:
unable to write 'random state'
mkdir: cannot create directory '/config/configDropins': Permission denied
/opt/ibm/docker/docker-server: line 32: /config/configDropins/defaults/keystore.xml: No such file or directory
JVMSHRC155E Error copying username into cache name
JVMSHRC686I Failed to startup shared class cache. Continue without using it as -Xshareclasses:nonfatal is specified
CWWKE0005E: The runtime environment could not be launched.
CWWKE0044E: There is no write permission for server directory /opt/ibm/wlp/output/defaultServer
By default OpenShift will run images as an assigned user ID unique to a project. Many available images have been written so that they can only be run as root, even though they have no requirement to run as root.
If you try to run such an image as a non-root user ID, it will fail, because its directories and files have been set up to be writable only by the root user.
Best practice is to write images so that they can be run as an arbitrary user ID. Unfortunately very few people do this, with the result that their images cannot be used in more secure multi-tenant environments for deploying applications in containers.
The OpenShift documentation provides guidelines on how to implement images so that they can run in such more secure environments. See the section 'Support Arbitrary User IDs' in:
https://docs.openshift.org/latest/creating_images/guidelines.html
If the image is built by a third party and they show no interest in changing their image so that it works in secure multi-tenant environments, you have a few options.
The first is to create a derived image whose build steps go back and fix permissions on the directories and files so they can be used. Note that you have to be careful about what you change permissions on, as changing permissions on a file in a derived image causes a complete copy of that file to be made in the new layer. If the files are large, this will start to blow out your image size. A minimal sketch follows.
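For Liberty specifically, the failing paths in the log trace are /config and /opt/ibm/wlp/output, so a derived image could hand those over to GID 0, since OpenShift's assigned user IDs always run with the root group. The base image tag and final UID below are assumptions:

cat > Dockerfile <<'EOF'
FROM websphere-liberty:latest
USER root
# give group 0 the same permissions the owner has on the writable paths
RUN chgrp -R 0 /config /opt/ibm/wlp/output && \
    chmod -R g=u /config /opt/ibm/wlp/output
USER 1001
EOF
docker build -t liberty-anyuid .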
The second is that, if you are an admin on the OpenShift cluster, you can relax security for the service account the image is run as, so that it is allowed to run the container as root. You should avoid doing this if possible, especially with third-party images which you do not trust. For details on how to do this see:
https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
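Concretely, this is a one-liner for a cluster admin (myproject is a placeholder for the project name; default is the service account pods use unless configured otherwise):

oc adm policy add-scc-to-user anyuid -z default -n myproject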
A final way, which may work with some images if the total size of what needs its permissions fixed is small, is to use an init container to copy the directories that need write access into an emptyDir volume. Then, in the main container, mount that emptyDir volume on top of the directory that was copied. This avoids needing to modify the image or enable anyuid.
Note that the amount of space available in emptyDir volumes may not be enough if you have to copy application binaries as well. This approach is probably only going to work where the application wants to update config files or create lock files; you wouldn't be able to use it if the same directory holds large amounts of transient file system data such as caches, databases, or logs. A sketch of this approach follows.
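Applied to the Liberty image, the init-container pattern might look like the following (written as a shell heredoc producing plain Kubernetes/OpenShift YAML; the pod name, volume name, and image tag are illustrative):

cat > liberty-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liberty
spec:
  initContainers:
  - name: copy-config
    image: websphere-liberty:latest
    command: ["sh", "-c", "cp -a /config/. /work/"]
    volumeMounts:
    - name: config-copy
      mountPath: /work
  containers:
  - name: liberty
    image: websphere-liberty:latest
    volumeMounts:
    - name: config-copy
      mountPath: /config
  volumes:
  - name: config-copy
    emptyDir: {}
EOF
oc create -f liberty-pod.yaml

The writable copy in the emptyDir volume then shadows the read-only /config when the main container starts.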

How to prevent root of the host OS from accessing content inside a container?

We are new to container technology and are currently evaluating whether it can be used for our new project. One of our key requirements is data security between multiple tenants, i.e. each container contains data that is owned by one particular tenant only. Even we, the server admins of the host servers, should NOT be able to access the content inside a container.
From some googling, we know that root on the host OS can execute commands inside a container, for example with the "docker exec" command. I suppose the command is executed with root privileges?
How to get into a docker container?
We wonder whether such access (not just "docker exec", but any method by which a host server admin could reach a container's content) can be blocked or disabled by some security configuration?
For the bash command specifically:
Add an exit command at the end of the container's .bashrc file, so a user who logs in interactively is immediately kicked back out.
You can go through this link for a better understanding of why such blocking is not implemented by default:
https://github.com/moby/moby/issues/8664
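As an illustration of that workaround (the container name app is hypothetical), note that it only hampers interactive bash sessions; anyone with root on the host can still run any non-bash command via docker exec:

docker exec app sh -c 'echo exit >> /root/.bashrc'
# subsequent 'docker exec -it app bash' sessions now exit immediately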

Strategy to run Grunt in Docker

We're developing an application for which I've set up three docker containers (web, db and build), which we run with docker-compose.
I've configured docker so that the host's folder (html) is shared as a writable folder with web and build.
Inside the build container, a Grunt watch task runs as the user node, which has UID 1000.
As the Grunt task regularly builds CSS and JavaScript files, these files belong to UID 1000. Since our whole team uses this setup to develop, on each teammate's machine the files end up belonging to whichever ("random") local user happens to have UID 1000.
What would be the best strategy to avoid this problem? I could imagine running the Grunt task with the UID of the host user who started the container, but how do I accomplish that?
I should mention that we don't need those generated files in version control, so it would be okay if the generated files were local to the docker container. But as the locations where these files are generated are spread across the whole application, I don't see how I could solve the problem with a read-only volume.
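One common way to implement the idea from the question, running the Grunt task with the host user's UID, is to pass that UID into the container at startup. Everything below is a sketch rather than an answer from the thread; HOST_UID and HOST_GID are variable names I've chosen, and the compose service is assumed to be named build:

# in docker-compose.yml, under the build service:
#   user: "${HOST_UID}:${HOST_GID}"
export HOST_UID=$(id -u) HOST_GID=$(id -g)   # bash's UID variable is read-only, hence the different names
docker-compose up build

Files the watch task writes to the bind mount are then owned by whoever started the containers, on each teammate's machine.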
