Permission denied after starting Jenkins container - docker

When creating a jenkins container, the following errors appear. What could be the problem?
jenkins_1 | touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
jenkins_1 | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
This is my docker-compose file:
version: '3.7'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - 7080:8080
      - 50000:50000
    privileged: true
    volumes:
      - /tmp/jenkins-test:/var/jenkins_home

This is explained in the issue:
sudo chown 1000 /tmp/jenkins-test
If the directory already contains files:
sudo chown -R 1000 volume_dir
This will store the jenkins data in /your/home on the host. Ensure
that /your/home is accessible by the jenkins user in container
(jenkins user - uid 1000) or use -u some_other_user parameter with
docker run.
You must set the correct permissions on the host before you mount the volume:
sudo chown 1000 volume_dir
Or you can try:
Resolved, albeit with torture involved.
Create a jenkins user on the host and note its uid
docker run -u <jenkins-uid> ...
Do NOT docker run -u 'jenkins' - This causes the container's own
jenkins user to continue to be used. Either choose a different name on
the host and pass this through or pass through the resultant uid.
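A hedged sketch of that approach, assuming a host user named jenkinshost (the name is illustrative) and the /tmp/jenkins-test directory from the compose file above:
# Create a host user and give it the Jenkins home directory, then pass its uid to docker run
sudo useradd -r jenkinshost
sudo chown -R "$(id -u jenkinshost)" /tmp/jenkins-test
docker run -d -u "$(id -u jenkinshost)" -p 7080:8080 -p 50000:50000 \
  -v /tmp/jenkins-test:/var/jenkins_home jenkins/jenkins:lts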
A bash script that you can try to run:
#!/bin/bash
# Create the Jenkins home directory on the host and give it to uid 1000 (the jenkins user in the image)
mkdir -p "$PWD/jenkins"
sudo chown -R 1000:1000 "$PWD/jenkins"
# Run Jenkins with that directory bind-mounted as /var/jenkins_home
docker run -d -p 8080:8080 -p 50000:50000 -v "$PWD/jenkins":/var/jenkins_home --name jenkins jenkins/jenkins:lts

If, after fixing all the permissions, it still does not work, change the volume mapping to a relative path like this:
./your_folder:/var/jenkins_home
It will work fine. Sometimes $PWD is not expanded correctly, and that is what creates the issue.
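For reference, a minimal sketch of that relative mapping inside the compose file above (the folder name ./jenkins_home is illustrative; it still needs to be owned by uid 1000):
    volumes:
      - ./jenkins_home:/var/jenkins_home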

Related

Running docker as non-root with --user $(id -u) can't create /var/lib/

Hello, I am fairly new to Docker and I am trying to get InfluxDB and Grafana up and running.
I already went through some problem solving and want to get you on the same page with a little summary.
Got a docker-compose file from here
Ran sudo docker-compose up -d
Ran into the problem that arguments like INFLUXDB_DB=db0 inside the docker-compose.yml are not applied by the containers, so the database db0 wasn't created, for example.
Changes to the containers would persist, though, so I could create a database and after a restart it was still there.
Tested each container standalone with docker run
Figured out that if I used a bind mount instead of Docker volumes it worked for InfluxDB
The Grafana container wouldn't start:
sudo docker run --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:latest
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
Read here that I need to define a user with $(id -u) if I want to use a bind mount with Grafana
Did that, but then the user has no permission to create the /var/lib/grafana directory:
sudo docker run --user $(id -u) --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:latest
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
When I set the --user argument to root with 0:0 it works, but I have read best practices saying that running as root is fine for testing but not ideal for production.
I also read that I can add a user to the docker group to give the user the permissions.
There was no docker group on my system, so I read here that I can create one and then add the docker.socket to that group via /etc/docker/daemon.json, but that file doesn't exist and I can't create it, and I think I am deep enough down the rabbit hole to just stop and ask whether I am on the wrong path and did something wrong.
How can I start the containers as non-root without giving them too many permissions? That is my main question, I think.
using:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
I solved this by running the container as root, chowning the data dir and then switching to the grafana user with su. In docker-compose, this looks like:
grafana:
  image: grafana/grafana:8.2.3
  volumes:
    - ./data/grafana:/var/lib/grafana
  user: root
  entrypoint:
    - /bin/sh
    - -c
    - |
      chown grafana /var/lib/grafana
      exec su grafana -s /bin/sh -c /run.sh
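If you are not using compose, a rough docker run equivalent of the same fix (a sketch; it assumes the image's stock /run.sh entrypoint, as used above):
# Start as root, fix ownership of the bind mount, then drop to the grafana user and run the normal startup script
docker run -d -p 3000:3000 \
  --user root \
  -v "$PWD/data/grafana:/var/lib/grafana" \
  --entrypoint /bin/sh \
  grafana/grafana:8.2.3 \
  -c 'chown grafana /var/lib/grafana && exec su grafana -s /bin/sh -c /run.sh'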
I remember dealing with similar issues long ago.
I recommend
one time
mkdir data/grafana
and then
sudo docker run --volume "$PWD/data:/var/lib" -p 3000:3000 grafana/grafana:latest
because as I recall whenever you volume mount something it mounts as owned by root. There is no way to change the ownership of the mounted volume.
But you can change the ownership of files and directories inside the mounted volume and they do not have to be root.
So in my solution you are just mounting data to /var/lib, /var/lib will be owned by root but /var/lib/grafana will be owned by a regular user.
This would obviously hide the contents of any other folders in the /var/lib directory, but maybe there is nothing important there. If there is, you could configure the program to look for /var/lib/grafana at some other location, for example /srv/grafana, and then change the invocation to
sudo docker run --env GRAFANA_DIR=/srv/grafana --volume "$PWD/data:/srv" -p 3000:3000 grafana/grafana:latest
Lastly, I do not see why it would be a security issue to run as root.
That is something I am not sure I agree with.

Jenkins Docker image, to use bind mounts or not?

I am reading through this bit of the Jenkins Docker README and there seems to be a section that, from my current understanding, contradicts itself.
https://github.com/jenkinsci/docker/blob/master/README.md
It seems to me that it says to NOT use a bind mount, and then says that using a bind mount is highly recommended?
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission
issues (the user used inside the container might not have rights to
the folder on the host machine). If you really need to bind mount
jenkins_home, ensure that the directory on the host is accessible by
the jenkins user inside the container (jenkins user - uid 1000) or use
-u some_other_user parameter with docker run.
docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
This will run Jenkins in detached mode with port forwarding and volume added. You can access logs with command 'docker logs CONTAINER_ID' in order to check first login token. ID of container will be returned from output of command above.
Backing up data
If you bind mount in a volume - you can simply back up
that directory (which is jenkins_home) at any time.
This is highly recommended. Treat the jenkins_home directory as you would a database - in Docker you would generally put a database on
a volume.
Do you use bind mounts? Would you recommend them? Why or why not? The documentation seems to be ambiguous.
As commented, the syntax used is for a volume:
docker run -d -v jenkins_home:/var/jenkins_home --name jenkins ...
That defines a Docker volume named jenkins_home, which will be created in /var/lib/docker/volumes/jenkins_home.
The idea being that you can easily back up said volume:
$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."
And reload it to another Docker instance.
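For example, a rough sketch of that reload step on the other instance (volume and archive names are taken from the backup command above):
$ docker volume create jenkins_home
$ docker run --rm -v jenkins_home:/var/jenkins_home -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar xvf /backup/jenkins_home.tar"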
This differs from a bind mount, which here involves building a new Docker image in order to be able to mount a local folder owned by your local user (instead of the default user defined in the official Jenkins image: 1000:1000):
FROM jenkins/jenkins:lts-jdk11
USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log
RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins
RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins
USER jenkins
Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.
The advantage of that approach would be to see the content of Jenkins home without having to use Docker.
You would run it with:
docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins

Error while mounting host directory in Nexus Docker

I am using the following command to run my container
docker run -d -p 9001:8081 --name nexus -v /Users/user.name/dockerVolume/nexus:/nexus-data sonatype/nexus3
The container starts and fails immediately, with the following logs:
mkdir: cannot create directory '../sonatype-work/nexus3/log': Permission denied
mkdir: cannot create directory '../sonatype-work/nexus3/tmp': Permission denied
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to No such file or directory
I was following this link to set it up
I have given said permissions to the nexus directory.
I also tried the following SO link but that didn't help me either.
I was still getting the same error.
Docker Version 17.12.0-ce-mac47 (21805)
[EDIT]
I did make changes to the ownership of my nexus folder on my host:
sudo chown -R 200 ~/dockerVolume/nexus
On my Ubuntu server I had to run:
chown -R 200:200 path/to/directory
Not just 200, but 200:200.
If you have this problem trying to run Nexus3 inside a Kubernetes cluster, you should set the UID with an initContainer. Just add it to your spec:
initContainers:
  - name: volume-mount-hack
    image: busybox
    command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
    volumeMounts:
      - name: <your nexus pvc volume name>
        mountPath: /nexus-data
That Dockerfile is available in the repo sonatype/docker-nexus3.
And mounting a volume is documented as:
Mount a host directory as the volume.
This is not portable, as it relies on the directory existing with correct permissions on the host. However it can be useful in certain situations where this volume needs to be assigned to certain specific underlying storage.
$ mkdir /some/dir/nexus-data && chown -R 200 /some/dir/nexus-data
$ docker run -d -p 8081:8081 --name nexus -v /some/dir/nexus-data:/nexus-data sonatype/nexus3
So don't forget to do, before your docker run:
chown -R 200 /Users/user.name/dockerVolume/nexus

Jenkins wrong volume permissions

I have a virtual machine hosting Oracle Linux where I've installed Docker and created containers using a docker-compose file. I placed the Jenkins volume under a shared folder, but when starting docker-compose up I got the following error for Jenkins:
jenkins | touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
jenkins | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
jenkins exited with code 1
Here's the volumes declaration:
volumes:
  - "/media/sf_devops-workspaces/dev-tools/continuous-integration/jenkins:/var/jenkins_home"
The easy fix is to use the -u parameter. Keep in mind this will run Jenkins as the root user (uid=0):
docker run -u 0 -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
As haschibaschi stated, your user in the container has a different userid:groupid than the user on the host.
To get around this, start the container without the (problematic) volume mapping, then run bash in the container:
docker run -p 8080:8080 -p 50000:50000 -it jenkins/jenkins:lts /bin/bash
Once inside the container's shell run the id command and you'll get results like:
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
Exit the container, go to the folder you are trying to map and run:
chown -R 1000:1000 .
With the permissions now matching, you should be able to run the original docker command with the volume mapping.
The problem is that your user in the container has a different userid:groupid than the user on the host.
You have two possibilities:
You can ensure that the user in the container has the same userid:groupid as the user on the host which has access to the mounted volume. For this you have to adjust the user in the Dockerfile: create a user in the Dockerfile with the same userid:groupid and then switch to this user (a Dockerfile sketch follows below): https://docs.docker.com/engine/reference/builder/#user
You can ensure that the user on the host has the same userid:groupid as the user in the container. For this, enter the container with docker exec -it <container-name> bash and show the user id with id -u <username> and the group id with id -G <username>. Change the permissions of the mounted volume to this userid:groupid.
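A minimal sketch of the first option, assuming the host user has uid/gid 1001:1001 (replace with the output of id -u and id -g on the host):
FROM jenkins/jenkins:lts
USER root
# Hypothetical host ids; make the container's jenkins user match the host user
RUN groupmod -g 1001 jenkins && usermod -u 1001 jenkins
USER jenkins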
You may be under SELinux. Running the container as privileged solved the issue for me:
sudo docker run --privileged -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
From https://docs.docker.com/engine/reference/commandline/run/#full-container-capabilities---privileged:
The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.
As an update to @Kiem's response, use $UID to ensure the container uses the same user id as the host:
docker run -u $UID -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home jenkins/jenkins:lts
I had a similar issue with Minikube/Kubernetes. I just added
securityContext:
  fsGroup: 1000
  runAsUser: 0
under deployment -> spec -> template -> spec
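A minimal sketch of where that lands in a Deployment manifest (the name, labels and image are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 0
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts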
This error can be solved using the following commands.
Go to your Jenkins data mount path: /media
Run the following commands:
cd /media
sudo chown -R ubuntu:ubuntu sf_devops-workspaces
Then restart the Jenkins Docker container:
docker-compose restart jenkins
Had a similar issue on macOS. I had installed Jenkins using Helm on Minikube/Kubernetes; after many attempts I fixed it by adding runAsUser: 0 (as root) in the values.yaml I use to deploy Jenkins:
master:
  usePodSecurityContext: true
  runAsUser: 0
  fsGroup: 0
Just be careful because that means that you will run all your commands as root.
Use this command:
$ chmod +757 /home/your-user/your-jenkins-data
First of all, you can verify your current user with the echo $USER command,
and after that you can specify which user the container runs as in the Dockerfile, like below (in my case the user is root):
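The original answer showed this as a screenshot; a minimal sketch of what the relevant Dockerfile lines would look like (the base image is illustrative):
FROM jenkins/jenkins:lts
# Run as root inside the container, as in the original answer
USER root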
I had the same issue; it got resolved after disabling SELinux.
It's not recommended to disable SELinux, so install a custom semodule and enable it instead.
It works. Changing the permissions alone won't work on CentOS 7.
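Not from the original answers, but a commonly used alternative on SELinux systems is the :Z suffix on the bind mount, which asks Docker to relabel the host directory for the container instead of disabling SELinux:
docker run -d -p 8080:8080 -p 50000:50000 -v /data/jenkins:/var/jenkins_home:Z jenkins/jenkins:lts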

Cannot call chown inside Docker container (Docker for Windows)

I am attempting to use the official Mongo image to boot up a database. I am using the -v flag to map a local directory to /data inside the container.
As part of the Dockerfile, it attempts to chown this directory to the user mongodb:
RUN mkdir -p /data/db /data/configdb \
&& chown -R mongodb:mongodb /data/db /data/configdb
VOLUME /data/db /data/configdb
However, this fails with the following error:
chown: changing ownership of '/data/db': Permission denied
What am I doing wrong here? I cannot find any documentation around this - surely the container should have full permissions to the mapped directory, as it was explicitly passed in the docker run command:
docker run -d --name mongocontainer -v R:\mongodata:/data/db -p 3000:27017 mongo:latest
You have similar issues illustrating the same error message in mongo issue 68 or issue 74.
The host machine volume directory cannot be under /Users (or ~). Try:
docker run --name mongo -p 27017:27017 -v /var/lib/boot2docker/my-mongodb-data/:/data/db -d mongo --storageEngine wiredTiger
The PR 470 adds:
WARNING: because MongoDB uses memory mapped files it is not possible to use it through vboxsf to your host (vbox bug).
VirtualBox shared folders are not supported by MongoDB (see docs.mongodb.org and related jira.mongodb.org bug).
This means that it is not possible with the default setup using Docker Toolbox to run a MongoDB container with the data directory mapped to the host.
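A common workaround (a sketch, not from the original thread) is to keep the data in a named Docker volume instead of a bind-mounted host folder, which avoids the vboxsf limitation:
# The volume lives inside the Docker VM, so MongoDB's memory-mapped files work normally
docker volume create mongodata
docker run -d --name mongocontainer -p 3000:27017 -v mongodata:/data/db mongo:latest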

Resources