Cannot call chown inside Docker container (Docker for Windows) - docker

I am attempting to use the official Mongo image to boot up a database, using the -v flag to map a local directory to /data inside the container.
As part of the Dockerfile, it attempts to chown this directory to the user mongodb:
RUN mkdir -p /data/db /data/configdb \
&& chown -R mongodb:mongodb /data/db /data/configdb
VOLUME /data/db /data/configdb
However, this fails with the following error:
chown: changing ownership of '/data/db': Permission denied
What am I doing wrong here? I cannot find any documentation on this; surely the container should have full permissions on the mapped directory, as it was explicitly passed in the docker run command:
docker run -d --name mongocontainer -v R:\mongodata:/data/db -p 3000:27017 mongo:latest

Similar issues with the same error message are illustrated in mongo issue 68 and issue 74.
The host machine volume directory cannot be under /Users (or ~). Try:
docker run --name mongo -p 27017:27017 -v /var/lib/boot2docker/my-mongodb-data/:/data/db -d mongo --storageEngine wiredTiger
PR 470 adds:
WARNING: because MongoDB uses memory mapped files it is not possible to use it through vboxsf to your host (vbox bug).
VirtualBox shared folders are not supported by MongoDB (see docs.mongodb.org and related jira.mongodb.org bug).
This means that it is not possible with the default setup using Docker Toolbox to run a MongoDB container with the data directory mapped to the host.
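In that situation, a common workaround is to use a Docker named volume instead of a host-mapped directory, so the data directory stays on the VM's native Linux filesystem where memory-mapped files work. A minimal sketch (the volume name mongodata is arbitrary):

```shell
# Create a named volume managed by Docker; it lives inside the VM's
# filesystem, so MongoDB's memory-mapped files work normally.
docker volume create mongodata

# Run the container against the named volume instead of a host path.
docker run -d --name mongocontainer -v mongodata:/data/db -p 3000:27017 mongo:latest
```

The trade-off is that the data no longer appears under R:\mongodata on the host; use docker cp or a backup container to copy files out when needed.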

Related

Running docker as non-root with --user $(id -u) can't create /var/lib/

Hello, I'm fairly new to Docker and am trying to get InfluxDB and Grafana up and running.
I already went through some troubleshooting and want to get you on the same page with a short summary.
I got a docker-compose file from here
and ran sudo docker-compose up -d.
I ran into the problem that arguments like INFLUXDB_DB=db0 inside the docker-compose.yml were not applied by the containers, so the database db0 was not created, for example.
Changes made inside the containers would persist, though: I could create a database and it was still there after a restart.
I tested each container standalone with docker run
and figured out that if I used a bind mount instead of a Docker volume it worked for InfluxDB,
but the Grafana container wouldn't start:
sudo docker run --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:latest
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
I read here that I need to define a user with $(id -u) if I want to use a bind mount with Grafana.
I did that, but then the user has no permission to create the /var/lib/grafana directory:
sudo docker run --user $(id -u) --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:latest
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
When I set the --user argument to root with 0:0 it works, but the best practices I have read say that running as root is fine for testing, yet not ideal for production.
I also read that I can add a user to the docker group to grant the needed permissions.
There was no docker group on my system, so I read here that I can create one and then add the docker socket to that group via /etc/docker/daemon.json, but that file doesn't exist and I can't create it. At this point I am deep enough down the rabbit hole to stop and ask whether I am on the wrong path entirely.
How can I start the containers as non-root without giving them too many permissions? That is my main question, I think.
using:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
I solved this by running the container as root, chowning the data dir, and then switching to the grafana user with su. In docker-compose, this looks like:
grafana:
  image: grafana/grafana:8.2.3
  volumes:
    - ./data/grafana:/var/lib/grafana
  user: root
  entrypoint:
    - /bin/sh
    - -c
    - |
      chown grafana /var/lib/grafana
      exec su grafana -s /bin/sh -c /run.sh
I remember dealing with similar issues long ago.
I recommend
one time
mkdir data/grafana
and then
sudo docker run --volume "$PWD/data:/var/lib" -p 3000:3000 grafana/grafana:latest
because, as I recall, whenever you mount a volume it is mounted as owned by root, and there is no way to change the ownership of the mount point itself.
But you can change the ownership of files and directories inside the mounted volume and they do not have to be root.
So in my solution you are just mounting data to /var/lib, /var/lib will be owned by root but /var/lib/grafana will be owned by a regular user.
This would obviously hide the contents of any other folders in the /var/lib directory, but maybe there is nothing important there. If there is, you could configure the program to look for /var/lib/grafana at some other location, for example /srv/grafana, and then change the invocation to
sudo docker run --env GRAFANA_DIR=/srv/grafana --volume "$PWD/data:/srv" -p 3000:3000 grafana/grafana:latest
Lastly, I do not see why it would be a security issue to run as root.
That is something I am not sure I agree with.
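Another approach, following Grafana's own Docker installation docs, keeps the container non-root by making the host directory owned by the UID the image runs as (472 in Grafana 5.1 and later; older images used 104, so verify against your image version). A sketch:

```shell
mkdir -p "$PWD/data"
# 472 is the grafana user's UID inside the official image (5.1+);
# check your image version before relying on it.
sudo chown -R 472:472 "$PWD/data"
docker run --user 472:472 --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:latest
```

This avoids both running as root inside the container and the chown-at-startup entrypoint.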

Permission denied after starting a Jenkins container

When creating a jenkins container, the following errors appear. What could be the problem?
jenkins_1 | touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
jenkins_1 | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
This is my docker-compose:
version: '3.7'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - 7080:8080
      - 50000:50000
    privileged: true
    volumes:
      - /tmp/jenkins-test:/var/jenkins_home
This is explained in the issue.
sudo chown 1000 /tmp/jenkins-test
If the directory already contains files:
sudo chown -R 1000 volume_dir
This will store the jenkins data in /your/home on the host. Ensure
that /your/home is accessible by the jenkins user in container
(jenkins user - uid 1000) or use -u some_other_user parameter with
docker run.
You must set the correct permissions on the host before you mount the
volume: sudo chown 1000 volume_dir
or you can try
Resolved albeit with torture involved.
Create a jenkins user on the host, note it's uid
docker run -u <jenkins-uid> ...
Do NOT docker run -u 'jenkins' - This causes the container's own
jenkins user to continue to be used. Either choose a different name on
the host and pass this through or pass through the resultant uid.
A bash script that you can try to run:
#!/bin/bash
mkdir $PWD/jenkins
sudo chown -R 1000:1000 $PWD/jenkins
docker run -d -p 8080:8080 -p 50000:50000 -v $PWD/jenkins:/var/jenkins_home --name jenkins jenkins
If it is still not working after fixing all the permissions, change the volume mapping to use a relative path, like this:
./your_folder:/var/jenkins_home
It should then work; sometimes $PWD does not resolve as expected, and that creates the issue.
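The same fix carries over to the docker-compose file from the question: chown the host directory to uid 1000 once, then pin the container to the jenkins user. A sketch (this assumes sudo chown -R 1000 /tmp/jenkins-test has been run on the host first):

```yaml
version: '3.7'
services:
  jenkins:
    image: jenkins/jenkins:lts
    # uid 1000 is the jenkins user baked into the official image
    user: "1000:1000"
    ports:
      - 7080:8080
      - 50000:50000
    volumes:
      - /tmp/jenkins-test:/var/jenkins_home
```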

Jenkins Docker image, to use bind mounts or not?

I am reading through this bit of the Jenkins Docker README, and there seems to be a section that, from my current understanding, contradicts itself.
https://github.com/jenkinsci/docker/blob/master/README.md
It seems to me that it says to NOT use a bind mount, and then says that using a bind mount is highly recommended?
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission
issues (the user used inside the container might not have rights to
the folder on the host machine). If you really need to bind mount
jenkins_home, ensure that the directory on the host is accessible by
the jenkins user inside the container (jenkins user - uid 1000) or use
-u some_other_user parameter with docker run.
docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p
50000:50000 jenkins/jenkins:lts this will run Jenkins in detached mode
with port forwarding and volume added. You can access logs with
command 'docker logs CONTAINER_ID' in order to check first login
token. ID of container will be returned from output of command above.
Backing up data
If you bind mount in a volume - you can simply back up
that directory (which is jenkins_home) at any time.
This is highly recommended. Treat the jenkins_home directory as you would a database - in Docker you would generally put a database on
a volume.
Do you use bind mounts? Would you recommend them? Why or why not? The documentation seems to be ambiguous.
As commented, the syntax used is for a volume:
docker run -d -v jenkins_home:/var/jenkins_home --name jenkins ...
That defines a Docker volume named jenkins_home, which will be created in:
/var/lib/docker/volumes/jenkins_home.
The idea being that you can easily back up said volume:
$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."
And reload it to another Docker instance.
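The reload step is the mirror image of the backup: untar the archive into a fresh volume through a throwaway container. A sketch, assuming the tarball produced by the backup command above:

```shell
# Create a fresh named volume for the restored data
docker volume create jenkins_home_restored

# Unpack the backup into it via a temporary container
docker run --rm -v jenkins_home_restored:/var/jenkins_home -v ~/backup:/backup ubuntu \
  bash -c "cd /var/jenkins_home && tar xvf /backup/jenkins_home.tar"

# Start Jenkins against the restored volume
docker run -d -v jenkins_home_restored:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
```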
This differs from bind mounts, which do involve building a new Docker image in order to be able to mount a local folder owned by your local user (instead of the default user defined in the official Jenkins image: 1000:1000):
FROM jenkins/jenkins:lts-jdk11
USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log
RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins
RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins
USER jenkins
Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.
The advantage of that approach would be to see the content of Jenkins home without having to use Docker.
You would run it with:
docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins

Error while mounting host directory in Nexus Docker

I am using the following command to run my container
docker run -d -p 9001:8081 --name nexus -v /Users/user.name/dockerVolume/nexus:/nexus-data sonatype/nexus3
The container starts and fails immediately, with the following logs:
mkdir: cannot create directory '../sonatype-work/nexus3/log':
Permission denied
mkdir: cannot create directory
'../sonatype-work/nexus3/tmp': Permission denied
Java HotSpot(TM)
64-Bit Server VM warning: Cannot open file
../sonatype-work/nexus3/log/jvm.log due to No such file or directory
I was following this link to set it up, and I have given the required permissions to the nexus directory.
I also tried the following SO link, but that didn't help me either; I was still getting the same error.
Docker Version 17.12.0-ce-mac47 (21805)
[EDIT]
I made changes to the ownership of my nexus folder on the host:
sudo chown -R 200 ~/dockerVolume/nexus
On my Ubuntu server I had to run:
chown -R 200:200 path/to/directory
Not just 200, but 200:200.
If you have this problem when trying to run Nexus3 inside a Kubernetes cluster, you should set the UID with an initContainer. Just add it to your spec:
initContainers:
  - name: volume-mount-hack
    image: busybox
    command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
    volumeMounts:
      - name: <your nexus pvc volume name>
        mountPath: /nexus-data
That Dockerfile is available in the sonatype/docker-nexus3 repo.
And mounting a volume is documented as:
Mount a host directory as the volume.
This is not portable, as it relies on the directory existing with correct permissions on the host. However it can be useful in certain situations where this volume needs to be assigned to certain specific underlying storage.
$ mkdir /some/dir/nexus-data && chown -R 200 /some/dir/nexus-data
$ docker run -d -p 8081:8081 --name nexus -v /some/dir/nexus-data:/nexus-data sonatype/nexus3
So don't forget to do, before your docker run:
chown -R 200 /Users/user.name/dockerVolume/nexus

How can I fix the permissions using docker on a bluemix volume?

In a container, I am trying to start mysqld.
I was able to create an image and push to the registry but when I want to start it, the /var/lib/mysql volume can't be initialized as I try to do a chown mysql on it and it is not allowed.
I checked docker specific solutions but for now I couldn't make any work.
Is there a way to set the right permissions on a bind-mounted folder from bluemix? Or is the option --volumes-from supported, I can't seem to make it work.
The only solution I can see right now is running mysqld as root, but I would rather not.
Attempt with bind mount
I created a volume on bluemix using cf ic volume create database
and tried to run mysql_install_db in my db container to initialize its content:
docker run --name init_vol -v database:/var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
mysql_install_db is supposed to populate the /var/lib/mysql and set the rights to the owner set in the --user option, but I get:
chown: changing ownership of '/var/lib/mysql': Permission denied.
I also tried the above in different ways, using sudo or a script. I tried with mysql_install_db --user=root, which does set up my folder correctly, except it is owned by the root user, and I would rather keep mysql running as the mysql user.
Attempt with --volumes-from and a data container
I created a data container with a volume /var/lib/mysql:
docker run --name db_data -v /var/lib/mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db --user=mysql
Then I ran my db container with the --volumes-from option:
docker run --name db_srv --volumes-from=db_data registry.ng.bluemix.net/<namespace>/<image>:<tag> sh -c 'mysqld_safe & tail -f /var/log/mysql.err'
docker inspect db_srv shows:
[{ "BluemixApp": null, "Config": {
...,
"WorkingDir": "",
... } ... }]
cf ic logs db_srv shows:
150731 15:25:11 mysqld_safe Starting mysqld daemon with databases from
/var/lib/mysql 150731 15:25:11 [Note] /usr/sbin/mysqld (mysqld
5.5.44-0ubuntu0.14.04.1-log) starting as process 377 .. /usr/sbin/mysqld: File './mysql-bin.index' not found (Errcode: 13)
150731 15:25:11 [ERROR] Aborting
which is due to --volumes-from not being supported, and to data created in the first run not persisting into the second.
In IBM Containers, the user namespace is enabled for the docker engine. The "Permission denied" issue appears to be because NFS does not allow the mapped user from the container to perform the operation.
On my local setup, on the docker host, I mounted an NFS share (exported with the no_root_squash option) and attached the volume to a container using the -v option. When the container is spawned from docker with the user namespace disabled, I am able to change the ownership of the bind mount inside the container. But with user-namespace-enabled docker, I get
chown: changing ownership of '/mnt/volmnt': Operation not permitted
The volume created by cf (cf ic volume create ...) is an NFS share; to verify, just try mount -t nfs4 from the container.
When the user namespace is enabled for the docker engine, the effective root inside the container is a non-root user outside the container process, and NFS does not allow the mapped non-root user to perform the chown operation on the volume inside the container.
Here is a workaround you may want to try.
In the Dockerfile
1.1 Create the user mysql with UID 1010, or any free ID, before the MySQL installation.
Other or new containers can then access the mysql data files on the volume via UID 1010:
RUN groupadd --gid 1010 mysql
RUN useradd --uid 1010 --gid 1010 -m --shell /bin/bash mysql
1.2 Install MySQL but do not initialize the database:
RUN apt-get update && apt-get install -y mysql-server && rm -rf /var/lib/mysql && rm -rf /var/lib/apt/lists/*
In the entry point script
2.1 Create the mysql data directory under the bind mount as user mysql, then link it as /var/lib/mysql.
Suppose the volume is mounted at /mnt/db inside the container (ice run -v <volume name>:/mnt/db --publish 3306... or cf ic run --volume <volume name>:/mnt/db ...).
Define mountpath env var
MOUNTPATH="/mnt/db"
Add mysql to group "root"
adduser mysql root
Set permissions on the mounted volume so that root group members can create directories and files:
chmod 775 $MOUNTPATH
Create the mysql directory under the volume:
su -c "mkdir -p /mnt/db/mysql" mysql
su -c "chmod 700 /mnt/db/mysql" mysql
Link the directory to /var/lib/mysql
ln -sf /mnt/db/mysql /var/lib/mysql
chown -h mysql:mysql /var/lib/mysql
Remove mysql from group root
deluser mysql root
chmod 755 $MOUNTPATH
2.2 For the first run, initialize the database as user mysql:
su -c "mysql_install_db --datadir=/var/lib/mysql" mysql
2.3 Start the mysql server as user mysql
su -c "/usr/bin/mysqld_safe" mysql
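Putting steps 2.1 through 2.3 together, the entry point script would look roughly like this (the first-run check around mysql_install_db is an assumption added here, not part of the original steps):

```shell
#!/bin/sh
MOUNTPATH="/mnt/db"

# Temporarily add mysql to group root so it can create its directory
adduser mysql root
chmod 775 "$MOUNTPATH"
su -c "mkdir -p $MOUNTPATH/mysql" mysql
su -c "chmod 700 $MOUNTPATH/mysql" mysql

# Link the volume-backed directory into the expected location
ln -sf "$MOUNTPATH/mysql" /var/lib/mysql
chown -h mysql:mysql /var/lib/mysql

# Drop the temporary group membership again
deluser mysql root
chmod 755 "$MOUNTPATH"

# Initialize the database only on the first run (assumed check)
if [ ! -d /var/lib/mysql/mysql ]; then
  su -c "mysql_install_db --datadir=/var/lib/mysql" mysql
fi

# Start the server as user mysql
exec su -c "/usr/bin/mysqld_safe" mysql
```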
You have multiple questions here. I will try to address some. Perhaps that will get you a step further in the right direction.
--volumes-from is not supported yet in IBM Containers. You can get around that by using the same --volume (-v) option on the first and subsequent containers, instead of using -v on the first container creation command and --volumes-from on the subsequent ones.
--user option is not supported also by IBM Containers.
I see that your syntax for using --user (I suppose on local docker) is not correct. All options for the docker run command must come before the image name; anything after the image name is considered a command to run inside the container. In that position, "--user=mysql" will be treated as a command that the system attempts to run, and it will fail.
The last error message you shared shows that some file was not found in the working directory, which causes the app to abort. You may work around that by using a script as the container command that changes to the right directory first.
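To make the option-ordering rule concrete: docker run parses everything before the image name as its own options, and everything after it as the command for the container. A sketch (image name taken from the question; note that, as mentioned above, IBM Containers did not support --user at all, so this applies to local docker):

```shell
# Correct: --user is consumed by docker run itself
docker run --user mysql registry.ng.bluemix.net/<namespace>/<image>:<tag> mysql_install_db

# Wrong: here "--user=mysql" is the command docker tries to execute
# inside the container, which fails
docker run registry.ng.bluemix.net/<namespace>/<image>:<tag> --user=mysql mysql_install_db
```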
