kubernetes mysql chown operation not permitted - docker

I am currently experimenting with Kubernetes and have installed a small cluster on ESX infrastructure I had running locally. I installed two slave nodes and a master node using Project Atomic with Fedora. The cluster installed fine and seems to be running. However, I first want to get a MySQL container up and running, but no matter what I try I cannot get it to run. This is the pod definition I am using:
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - resources:
        limits:
          cpu: 0.5
      image: mysql:5.6
      name: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: myPassw0rd
      ports:
        - containerPort: 3306
          name: mysql
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-persistent-storage
      nfs:
        server: 10.0.0.2
        path: "/export/mysql"
For the volume I have already tried all kinds of solutions: a persistent volume with and without a claim, a host volume, and emptyDir, but I always end up with this error when the container starts:
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
I must be doing something stupid, but I have no idea what to do here.

OK, it seems I can answer my own question. The problem was in the NFS share being used as the persistent volume: I had it set to 'all_squash' in the export, but it needs 'no_root_squash' so that root inside the Docker container is allowed to chown the NFS-bound volume.
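For reference, a minimal /etc/exports sketch for this kind of setup (the client subnet is an assumption, adjust it to your network), re-exported afterwards with exportfs:
# /etc/exports on the NFS server: let root on the clients keep its identity
/export/mysql 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)
# reload the export table
exportfs -ra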

I solved this problem another way. I had a discussion with the system administrator about allowing root access to the exported NFS directory on the NFS client machine(s). He had valid security reasons for not enabling it (read the no_root_squash section of the exports documentation).
In the end I didn't have to request no_root_squash. This is what I did to get the MySQL pod running without compromising security.
Step 1
Exec into the pod's container running the mysql image: kubectl exec -it -n <namespace> <mysql_pod> -- bash
Step 2
Obtain the uid (999) and gid (999) of the mysql user, either with id mysql or by looking it up in /etc/passwd (e.g. grep mysql /etc/passwd). The mysql username can be found in the 2nd instruction of the image's Dockerfile.
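For the official mysql image this typically looks like the following (the exact ids can differ between image versions, so verify in your own container):
$ id mysql
uid=999(mysql) gid=999(mysql) groups=999(mysql)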
Step 3
Change the ownership of the directory that holds the content of the container's /var/lib/mysql. This is most likely the directory specified in your PersistentVolume. This command is executed on the host machine, not in the Pod!
# PersistentVolume
...
nfs:
  path: /path/to/app/mysql/directory
  server: nfs-server
Run chown -R 999:999 /path/to/app/mysql/directory
Step 4
Finally, after everything is set, deploy your MySQL Pod (Deployment, ReplicaSet or whatever you are using).

This can also be resolved by having the mysql container run with the same uid that owns the NFS volume, using Kubernetes' securityContext definition.
containers:
  - name: mysql
    image: ...
    securityContext:
      runAsUser: 2015
      allowPrivilegeEscalation: false
Here 2015 should be replaced with whatever uid owns the NFS path.
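One way to find that uid is to check the export directory on the NFS server (the path here is the one from the question, adjust it for your setup):
stat -c '%u %g' /export/mysql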

Related

Openshift 4 - Mkdir command gets permission denied error

I have set up a deployment for an image that doesn't specify any user to run as. When the image starts, it tries to create a directory at /data/cache and encounters a permission denied error.
I tried to create the directory from a terminal in the pod and hit the same issue:
$ whoami
1000910000
$ mkdir /data/cache
mkdir: cannot create directory ‘/data/cache’: Permission denied
I found this, but it requires the image to be run as a specific user, and I can't change the Dockerfile. Is there any way to allow the image write access to /data?
Thank you
This is due to how OpenShift creates/manages containers: every time you deploy, it runs the container with a random user ID.
You should check how to support arbitrary user ids:
https://docs.openshift.com/container-platform/4.7/openshift_images/create-images.html
Support arbitrary user ids
By default, OpenShift Container Platform runs containers using an
arbitrarily assigned user ID. This provides additional security
against processes escaping the container due to a container engine
vulnerability and thereby achieving escalated permissions on the host
node.
For an image to support running as an arbitrary user, directories and
files that are written to by processes in the image must be owned by
the root group and be read/writable by that group. Files to be
executed must also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file
permissions to allow users in the root group to access them in the
built image:
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
Because the container user is always a member of the root group, the
container user can read and write these files.
So you should really try to adapt your image to follow these rules.
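If you cannot change the original Dockerfile but can build a derived image on top of it, a minimal sketch would be (the base image name and tag are placeholders):
# derived image that only fixes permissions for arbitrary uids
FROM the-original-image:tag
# arbitrary uids always belong to the root group (gid 0), so make /data group-owned by root and group-writable
RUN mkdir -p /data/cache && \
    chgrp -R 0 /data && \
    chmod -R g=u /data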
If you cannot change the container itself, then mounting an emptyDir directory in this place could be an option.
Add it like so to the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example
spec:
  ...
    spec:
      containers:
        - name: example
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /data/cache
              name: cache-volume
      volumes:
        - name: cache-volume
          emptyDir: {}

How can I set permissions on an attached Volume within a Docker container?

I'm trying to deploy the Prometheus docker container with persistent data via an NFS volume using a Docker named volume. I'm deploying with Ansible, so I'll post Ansible config, but I've executed the same using Docker's CLI commands and the issue presents in that case as well.
When I deploy the container and review the container's docker logs, I see that /etc/prometheus is shared appropriately and attached to the container. However, /prometheus, which is where the container stores relevant DB and metrics, gives permission denied.
According to this answer, /prometheus is required to be chowned to nobody. This doesn't seem to happen within the container upon startup.
Here's the volume creation from my Ansible role:
- name: "Creates named docker volume"
docker_volume:
volume_name: prometheus_persist
state: present
driver_options:
type: nfs
o: "addr={{ nfs_server }},rw,nolock"
device: ":{{ prometheus_nfs_path }}"
Which is equivalent to this Docker CLI command:
docker volume create -d local -o type=nfs -o o=addr={{ nfs_server }},rw -o device=:{{ prometheus_nfs_path }} prometheus_persist
Here's my container deployment stanza
- name: "Deploy prometheus container"
docker_container:
name: prometheus
# hostname: prometheus
image: prom/prometheus
restart_policy: always
state: started
ports: 9090:9090
user: ansible:docker
volumes:
- "{{ prometheus_config_path }}:/etc/prometheus"
mounts:
- source: prometheus_persist
target: /prometheus
read_only: no
type: volume
comparisons:
env: strict
Which is equivalent to this Docker CLI command:
docker run -v prometheus_persist:/prometheus -v "{{ prometheus_config_path }}:/etc/prometheus" -p 9090:9090 --name prometheus prom/prometheus
Again, the container logs upon deployment indicate permission denied on /prometheus. I've tested by mounting the prometheus_persist named volume on a generic Ubuntu container (roughly as shown below); it mounts fine, and I can touch files within. Any advice on how to resolve this?
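The mount test looked roughly like this (the image and destination path are only illustrative):
docker run --rm -it -v prometheus_persist:/mnt ubuntu bash -c 'ls -ln /mnt && touch /mnt/testfile'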
This turned out to be an issue with NFS squashing. The NFS exporter in my case is an old Synology, which doesn't allow no_root_squash to be set. However, mapping all users to admin on the NFS share resolved the issue.

Copy files from container to host while inside the container

I'm working on an automation pipeline using Kubernetes and Jenkins. All my commands are running from inside the jnlp-slave container. The jnlp-slave is deployed onto a worker node by Kubernetes. I have /var/run/docker.sock mounted (-v) on my jnlp-slave so it can run docker commands from inside the container.
Issue:
I'm trying to copy files inside the jnlp-slave container to the host machine (worker node), but the command below does not copy files to the host machine; instead it copies them to the destination path inside the container itself:
docker cp <container_id>:/home/jenkins/workspace /home/jenkins/workspace
Clarification:
Since the container is executing the command, files located inside the container are copied to the destination path, which is also inside the container.
Normally, docker commands are executed on the host machine. Therefore, the docker cp can be used to copy files from container to host and from host to container. But in this case, the docker cp is executed from inside the container.
How can I make the container to copy files to the host machine without running docker commands on the host? Is there a command which the container can run to copy files to the host?
P.S. I've tried mounting a volume from the host, but the files can only be shared from the host to the container and not the other way around. Any help is appreciated, thanks.
As suggested in comments, you should probably redesign your solution entirely.
But let's summarize what you currently have and try to figure out what you can do with it and not make your solution even more complicated at the same time.
What I did was copy the files from jnlp-slave container to the other
containers.
Copying files from one container to all the others is a bit of an overkill (btw, how many of them do you place in one Pod?)
Maybe your containers don't have to be deployed within the same Pod? If for some reason this is currently impossible, maybe at least the content of the /home/jenkins/workspace directory shouldn't be an integral part of your docker image, as it is now? If this is also impossible, you have no other remedy than to somehow copy those files from the original container based on that image to a shared location which is also available to the other containers existing within the same Pod.
emptyDir, mentioned in comments might be an option allowing you to achieve it. However keep in mind that the data stored in an emptyDir volume is not persistent in any way and is deleted along with the deletion of the pod. If you're ok with this fact, it may be a solution for you.
If your data is originally part of your image, it should be first transferred to your emptyDir volume as by its very definition it is initially empty. Simply mounting it under /home/jenkins/workspace won't make the data originally available in this directory in your container automagically appear in the emptyDir volume. From the moment you mount your emptyDir under /home/jenkins/workspace it will contain the content of emptyDir i.e. nothing.
Therefore you need to pre-populate it somehow, and one of the available solutions to do that is using an initContainer. As your data is originally an integral part of your docker image, you must use the same image for the initContainer as well.
Your deployment may look similar to the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      initContainers:
        - name: pre-populate-empty-dir
          image: <image-1>
          command: ['sh', '-c', 'cp -a /home/jenkins/workspace/* /mnt/empty-dir-content/']
          volumeMounts:
            - name: cache-volume
              mountPath: "/mnt/empty-dir-content/"
      containers:
        - name: app-container-1
          image: <image-1>
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: cache-volume
              mountPath: "/home/jenkins/workspace"
        - name: app-container-2
          image: <image-2>
          ports:
            - containerPort: 8082
          volumeMounts:
            - name: cache-volume
              mountPath: "/home/jenkins/workspace"
        - name: app-container-3
          image: <image-3>
          ports:
            - containerPort: 8083
          volumeMounts:
            - name: cache-volume
              mountPath: "/home/jenkins/workspace"
      volumes:
        - name: cache-volume
          emptyDir: {}
Once your data has been copied to an emptyDir volume by the initContainer, it will be available for all the main containers and can be mounted under the same path by each of them.
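To verify that the workspace content is visible in the main containers, you can exec into one of them (the pod name placeholder refers to a pod created by the sample Deployment above):
kubectl exec -it <sample-deployment-pod> -c app-container-2 -- ls /home/jenkins/workspace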

Kubernetes: how to run application in the container with root privileges

I set up Kubernetes with the master and node on the same hardware (Ubuntu 18) using this tutorial.
Kubernetes 1.15.3
docker 19.03.2
The container I created runs emulation software that needs root privileges with write access to the /proc/sys/kernel directory. When Kubernetes starts the container, I get an error inside the service script /etc/init.d/myservicescript indicating that it can't write to /proc/sys/kernel/xxx. The container runs on Ubuntu 14.
I tried to set the "runAsUser: 0" in the pod's yaml file
I tried to set "USER 0" in the Dockerfile
Neither works. Any suggestion on how to get this working?
Changing the user inside the container does not give you any privileges on the host. In order to get elevated privileges, you must set privileged: true in the security context.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      args:
        - sleep
        - "999"
      securityContext:
        privileged: true
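A quick way to check from inside the pod that /proc/sys is now mounted writable (the particular file under /proc/sys/kernel is only an example):
kubectl exec -it busybox -- sh -c 'test -w /proc/sys/kernel/hostname && echo writable'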

how to inspect the content of persistent volume by kubernetes on azure cloud service

I have packed the software into a container. I need to deploy the container to a cluster with Azure Container Service. The software writes its output to a directory /src/data/, and I want to access the content of that whole directory.
After searching, I have two possible solutions:
use Blob Storage on Azure, but after searching I can't find a workable method.
use a Persistent Volume, but all the official Azure documentation and pages I found are about the Persistent Volume itself, not about how to inspect its contents.
I need to access and manage my output directory on the Azure cluster. In other words, I need a savior.
As I've explained here and here, in general, if you can interact with the cluster using kubectl, you can create a pod/container, mount the PVC inside, and use the container's tools to, e.g., ls the contents. If you need more advanced editing tools, replace the container image busybox with a custom one.
Create the inspector pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
    - image: busybox
      name: pvc-inspector
      command: ["tail"]
      args: ["-f", "/dev/null"]
      volumeMounts:
        - mountPath: /pvc
          name: pvc-mount
  volumes:
    - name: pvc-mount
      persistentVolumeClaim:
        claimName: YOUR_CLAIM_NAME_HERE
EOF
Inspect the contents
kubectl exec -it pvc-inspector -- sh
$ ls /pvc
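If you also need to copy the data out for offline inspection, kubectl cp works against the same pod (the local destination path is just an example):
kubectl cp pvc-inspector:/pvc ./pvc-contents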
Clean Up
kubectl delete pod pvc-inspector
