OpenShift 4 - mkdir command gets permission denied error - docker

I have set up a deployment for an image that doesn't specify any user to run as. When the image starts, it tries to create a directory at /data/cache and encounters a permission denied error.
I tried to create the directory from the terminal in the pod and hit the same issue:
$ whoami
1000910000
$ mkdir /data/cache
mkdir: cannot create directory ‘/data/cache’: Permission denied
I found this, but it requires the image to be run as a specific user, and I can't change the Dockerfile. Is there any way to give the image write access to /data?
Thank you

This is due to how OpenShift creates and manages images: every time you deploy, it runs the container as a random user ID.
You should check how to support arbitrary user IDs:
https://docs.openshift.com/container-platform/4.7/openshift_images/create-images.html
Support arbitrary user ids
By default, OpenShift Container Platform runs containers using an
arbitrarily assigned user ID. This provides additional security
against processes escaping the container due to a container engine
vulnerability and thereby achieving escalated permissions on the host
node.
For an image to support running as an arbitrary user, directories and
files that are written to by processes in the image must be owned by
the root group and be read/writable by that group. Files to be
executed must also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file
permissions to allow users in the root group to access them in the
built image:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
Because the container user is always a member of the root group, the
container user can read and write these files.
So you should really try to adapt your image to follow these rules.
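If rebuilding the image were ever an option (the question says it is not, so treat this purely as an illustration of the documented pattern), a thin wrapper image could prepare /data for the arbitrary UID:
# Hypothetical wrapper Dockerfile; the base image name is a placeholder
FROM your-original-image:latest
RUN mkdir -p /data/cache && \
    chgrp -R 0 /data && \
    chmod -R g=u /data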

If you cannot change the container image itself, then mounting an emptyDir volume at that path could be an option.
Add it to the Deployment like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: example
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/cache
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
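Once that is rolled out, the same test from the pod terminal should succeed, since the emptyDir mount is writable by the pod's assigned UID. A quick check (a sketch, using the deployment name from the example above):
kubectl exec deploy/example-deployment -- mkdir -p /data/cache/test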

Related

How to attach a volume to a Kubernetes pod container like in Docker?

I am new to Kubernetes but familiar with docker.
Docker Use Case
Usually, when I want to persist data I just create a named volume and attach it to the container, and even when I stop it and start another container from the same image, I can see the data persisting.
So this is what I used to do in Docker:
docker volume create nginx-storage
docker run -it --rm -v nginx-storage:/usr/share/nginx/html -p 80:80 nginx:1.14.2
Then I:
Create a new HTML file in /usr/share/nginx/html
Stop the container
Run the same docker run command again (which creates another container with the same volume)
The HTML file still exists (which means the data persisted in that volume)
Kubernetes Use Case
Usually, when I work with Kubernetes volumes I specify a PVC (PersistentVolumeClaim) and a PV (PersistentVolume) using hostPath, which bind-mounts a directory or a file from the host machine into the container.
What I want to do is reproduce the behavior from the Docker use case above. How can I do that? Is the volume-creation process in Kubernetes different from Docker's? If possible, a YAML file would help me understand.
To a first approximation, you can't (portably) do this. Build your content into the image instead.
There are two big practical problems, especially if you're running a production-oriented system on a cloud-hosted Kubernetes:
If you look at the list of PersistentVolume types, very few of them can be used in ReadWriteMany mode. It's very easy to get, say, an AWSElasticBlockStore volume that can only be used on one node at a time, and something like this will probably be the default cluster setup. That means you'll have trouble running multiple pod replicas serving the same (static) data.
Once you do get a volume, it's very hard to edit its contents. Consider the aforementioned EBS volume: you can't edit it without being logged into the node on which it's mounted, which means finding the node, convincing your security team that you can have root access over your entire cluster, enabling remote logins, and then editing the file. That's not something that's actually possible in most non-developer Kubernetes setups.
The thing you should do instead is build your static content into a custom image. An image registry of some sort is all but required to run Kubernetes and you can push this static content server into the same registry as your application code.
FROM nginx:1.14.2
COPY . /usr/share/nginx/html
# Base image has a working CMD, no need to repeat it
Then in your deployment spec, set image: registry.example.com/nginx-frontend:20220209 or whatever you've chosen to name this build of this image, and do not use volumes at all. You'd deploy this the same way you deploy other parts of your application; you could use Helm or Kustomize to simplify the update process.
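For illustration, the corresponding deployment spec would then just reference that image and carry no volumes at all (a sketch; the deployment name and replica count are arbitrary, the image tag is the hypothetical one above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-frontend
  template:
    metadata:
      labels:
        app: nginx-frontend
    spec:
      containers:
      - name: nginx
        image: registry.example.com/nginx-frontend:20220209
        ports:
        - containerPort: 80
        # no volumeMounts: the static content is baked into the image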
Correspondingly, in the plain-Docker case, I'd avoid volumes here. You don't discuss how files get into the nginx-storage named volume; if you're using imperative commands like docker cp or debugging tools like docker exec, those approaches are hard to script and are intrinsically local to the system they're running on. It's not easy to copy a Docker volume from one place to another. Images, though, can be pushed and pulled through a registry.
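The image-based workflow is also easy to script in plain Docker (a sketch; the registry and tag are the illustrative names from above):
# build the static-content image and publish it to your registry
docker build -t registry.example.com/nginx-frontend:20220209 .
docker push registry.example.com/nginx-frontend:20220209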
I managed to do that by creating a PVC only. This is how I did it (with an Nginx image):
nginx-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
nginx-deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nginx-data
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
Once I ran kubectl apply on the PVC and then on the deployment, going to localhost:30080 showed a 404 Not Found page, which means all the data in /usr/share/nginx/html was deleted once the container started. That's because the volume bind-mounts a directory from the k8s cluster node into the container:
/usr/share/nginx/html <-- dir in volume
/var/lib/k8s-pvs/nginx2-data/pvc-9ba811b0-e6b6-4564-b6c9-4a32d04b974f <-- dir from node (was automatically created)
I tried adding a new index.html file into that container's html dir, then deleted the container; a new container was created by the pod, and checking localhost:30080 showed the newly created home page.
I then deleted the deployment and reapplied it (without deleting the PVC), checked localhost:30080, and everything still persisted.
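For reference, the apply-and-check sequence described above is just (a sketch; the file names match the manifests above):
kubectl apply -f nginx-pvc.yaml
kubectl apply -f nginx-deployment.yaml
kubectl get pvc nginx-data   # should report Bound
kubectl get pods -l app=nginx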
An alternative solution, suggested in the comments by larsks: kubernetes.io/docs/tasks/configure-pod-container/…

Copy files from container to host while inside the container

I'm working on an automation pipeline using Kubernetes and Jenkins. All my commands run from inside the jnlp-slave container. The jnlp-slave is deployed onto a worker node by Kubernetes. I have -v /var/run/docker.sock mounted on my jnlp-slave so it can run docker commands from inside the container.
Issue:
I'm trying to copy files from inside the jnlp-slave container to the host machine (worker node), but the command below does not copy the files to the host machine; it copies them to a destination inside the container itself:
docker cp <container_id>:/home/jenkins/workspace /home/jenkins/workspace
Clarification:
Since the container is executing the command, the files located inside the container are copied to the destination path, which is also inside the container.
Normally, docker commands are executed on the host machine, so docker cp can be used to copy files from container to host and from host to container. But in this case, docker cp is executed from inside the container.
How can I make the container copy files to the host machine without running docker commands on the host? Is there a command the container can run to copy files to the host?
P.S. I've tried mounting a volume on the host, but the files can only be shared from the host to the container and not the other way around. Any help is appreciated, thanks.
As suggested in the comments, you should probably redesign your solution entirely.
But let's summarize what you currently have and try to figure out what you can do with it without making your solution even more complicated at the same time.
What I did was copy the files from jnlp-slave container to the other
containers.
Copying files from one container to all the others is a bit of an overkill (by the way, how many of them do you place in one Pod?).
Maybe your containers don't have to be deployed within the same Pod? If for some reason this is currently impossible, maybe at least the content of the /home/jenkins/workspace directory shouldn't be an integral part of your docker image as it is now? If this is also impossible, you have no other remedy than to somehow copy those files from the original container based on that image to a shared location which is also available to the other containers existing within the same Pod.
emptyDir, mentioned in the comments, might be an option allowing you to achieve it. However, keep in mind that the data stored in an emptyDir volume is not persistent in any way and is deleted along with the deletion of the pod. If you're OK with that, it may be a solution for you.
If your data is originally part of your image, it first needs to be transferred to your emptyDir volume, as by its very definition it is initially empty. Simply mounting it under /home/jenkins/workspace won't make the data originally available in that directory in your container automagically appear in the emptyDir volume. From the moment you mount your emptyDir under /home/jenkins/workspace, it will contain the content of the emptyDir, i.e. nothing.
Therefore you need to pre-populate it somehow, and one of the available ways to do that is to use an initContainer. As your data is originally an integral part of your docker image, you must use the same image for the initContainer as well.
Your deployment may look similar to the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      initContainers:
      - name: pre-populate-empty-dir
        image: <image-1>
        command: ['sh', '-c', 'cp -a /home/jenkins/workspace/* /mnt/empty-dir-content/']
        volumeMounts:
        - name: cache-volume
          mountPath: "/mnt/empty-dir-content/"
      containers:
      - name: app-container-1
        image: <image-1>
        ports:
        - containerPort: 8081
        volumeMounts:
        - name: cache-volume
          mountPath: "/home/jenkins/workspace"
      - name: app-container-2
        image: <image-2>
        ports:
        - containerPort: 8082
        volumeMounts:
        - name: cache-volume
          mountPath: "/home/jenkins/workspace"
      - name: app-container-3
        image: <image-3>
        ports:
        - containerPort: 8083
        volumeMounts:
        - name: cache-volume
          mountPath: "/home/jenkins/workspace"
      volumes:
      - name: cache-volume
        emptyDir: {}
Once your data has been copied to an emptyDir volume by the initContainer, it will be available for all the main containers and can be mounted under the same path by each of them.
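You can verify that the pre-populated data is visible from each of the main containers with something like (a sketch using the container names from the manifest above):
kubectl exec <pod-name> -c app-container-1 -- ls /home/jenkins/workspace
kubectl exec <pod-name> -c app-container-2 -- ls /home/jenkins/workspace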

Access Kubernetes pod's log files from inside the pod?

I'm currently migrating a legacy server to Kubernetes, and I found that kubectl or dashboard only shows the latest log file, not the older versions. In order to access the old files, I have to ssh to the node machine and search for it.
In addition to being a hassle, my team wants to restrict access to the node machines themselves, because they will be running pods from many different teams and unrestricted access could be a security issue.
So my question is: can I configure Kubernetes (or a Docker image) so that these old (rotated) log files are stored in some directory accessible from inside the pod itself?
Of course, in a pinch, I could probably just execute something like run_server.sh | tee /var/log/my-own.log when the pod starts... but then, to do it correctly, I'll have to add the whole logfile rotation functionality, basically duplicating what Kubernetes is already doing.
There are a couple of ways to do this, depending on the scenario. If you are just interested in the log of the same pod from before the last restart, you can use the --previous flag:
kubectl logs -f <pod-name-xyz> --previous
But since in your case, you are interested in looking at log files beyond one rotation, here is how you can do it. Add a sidecar container to your application container:
# volumeMounts on your application container:
volumeMounts:
- name: varlog
  mountPath: /tmp/logs
# the log-helper sidecar container:
- name: log-helper
  image: busybox
  args: [/bin/sh, -c, 'tail -n+1 -f /tmp/logs/*.log']
  volumeMounts:
  - name: varlog
    mountPath: /tmp/logs
# and the shared volume, backed by the host's /var/log:
volumes:
- name: varlog
  hostPath:
    path: /var/log
This makes the host's /var/log directory, which has all the logs, available at /tmp/logs inside the containers, and the tail command ensures that the content of all the files is streamed. Now you can run:
kubectl logs <pod-name-abc> -c log-helper
This solution does away with SSH access, but it still needs access to kubectl and an extra sidecar container. I still think this is a bad solution, and you should consider one of the options from the cluster-level logging architecture documentation of Kubernetes, such as 1 or 2.
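For context, here is a minimal sketch of a complete Pod using the sidecar pattern, closer to the official Kubernetes logging documentation: instead of the node's /var/log, the application writes its rotated logs into a shared emptyDir and a busybox sidecar streams them (all names, images, and paths are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest              # placeholder for your application image
    volumeMounts:
    - name: applog
      mountPath: /var/log/app         # assumed log directory of the application
  - name: log-helper
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app/*.log']
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  volumes:
  - name: applog
    emptyDir: {}
The rotated files stay in the shared volume, so they remain reachable from inside the pod with kubectl exec into either container.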

How to inspect the content of a persistent volume in Kubernetes on Azure cloud service

I have packed the software into a container. I need to put the container onto a cluster via Azure Container Service. The software writes its output to a directory /src/data/, and I want to access the content of that whole directory.
After searching, I have two options:
use Blob Storage on Azure, but after searching, I can't find a workable method.
use a Persistent Volume, but all the official Azure documentation and the pages I found are about the Persistent Volume itself, not about how to inspect it.
I need to access and manage my output directory on the Azure cluster. In other words, I need a savior.
As I've explained here and here, in general, if you can interact with the cluster using kubectl, you can create a pod/container, mount the PVC inside, and use the container's tools to, e.g., ls the contents. If you need more advanced editing tools, replace the container image busybox with a custom one.
Create the inspector pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - image: busybox
    name: pvc-inspector
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /pvc
      name: pvc-mount
  volumes:
  - name: pvc-mount
    persistentVolumeClaim:
      claimName: YOUR_CLAIM_NAME_HERE
EOF
Inspect the contents
kubectl exec -it pvc-inspector -- sh
$ ls /pvc
Clean Up
kubectl delete pod pvc-inspector

kubernetes mysql chown operation not permitted

I am currently experimenting with Kubernetes and have installed a small cluster on ESX infra I had running here locally. I installed two slave nodes and a master node using Project Atomic with Fedora. The cluster is all installed fine and seems to be running. However, I first want to get a MySQL container up and running, but no matter what I try I cannot get it to run.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 0.5
    image: mysql:5.6
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: myPassw0rd
    ports:
    - containerPort: 3306
      name: mysql
    volumeMounts:
    - name: mysql-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-persistent-storage
    nfs:
      server: 10.0.0.2
      path: "/export/mysql"
For the volume I already tried all kinds of solutions: I tried using a persistent volume with and without a claim, and I tried using a host volume and emptyDir, but I always end up with this error when the container starts:
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
I must be doing something stupid, but I have no idea what to do here.
OK, it seems I can answer my own question: the problem was in the NFS share that was being used as the persistent volume. I had it set to 'all_squash' in the export, but it needs 'no_root_squash' to allow root (in the case of a Docker container) to chown on the NFS-bound volume.
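For reference, the relevant entry on the NFS server would look roughly like this (a sketch; the client network is an assumption, the export path is the one from the question):
# /etc/exports on the NFS server
/export/mysql  10.0.0.0/24(rw,sync,no_root_squash)
After editing the export, re-export it with exportfs -ra on the server.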
I solved this problem another way. I had an argument with the system administrator about allowing root access to the exported NFS directory on the NFS client machine(s). He had valid security reasons for not setting it, such as reason one and reason two (read the no_root_squash section).
In the end I didn't have to request no_root_squash. This is what I did to get the mysql pod running without compromising security.
Step 1
Exec into the pod's container running the mysql image: kubectl exec -it -n <namespace> <mysql_pod> -- bash
Step 2
Obtain the uid (999) and gid (999) of the mysql user: run id mysql or grep mysql /etc/passwd. The mysql username can be found in the second instruction of the image's Dockerfile.
Step 3
Change the ownership of the directory that holds the content of the container's /var/lib/mysql. This is most likely the directory specified in your PersistentVolume. This command is executed on the host (NFS server) machine, not in the Pod!
# PersistentVolume
...
nfs:
  path: /path/to/app/mysql/directory
  server: nfs-server
Run chown -R 999:999 /path/to/app/mysql/directory
Step 4
Finally, after everything is set, deploy your MySQL Pod (Deployment, ReplicaSet, or whatever you are using).
This can also be resolved by having the mysql container run with the same uid that owns the NFS volume, using Kubernetes' securityContext definition.
containers:
- name: mysql
  image: ...
  securityContext:
    runAsUser: 2015
    allowPrivilegeEscalation: false
Here, 2015 should be replaced with whatever UID owns the NFS path.
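Put together with the Pod manifest from the question, that would look like this (2015 remains a placeholder for whatever UID owns the NFS path):
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    securityContext:
      runAsUser: 2015
      allowPrivilegeEscalation: false
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: myPassw0rd
    ports:
    - containerPort: 3306
    volumeMounts:
    - name: mysql-persistent-storage
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-persistent-storage
    nfs:
      server: 10.0.0.2
      path: "/export/mysql"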
