The Spring Boot application is deployed on OpenShift 4. This application needs to create a file on the NFS share.
The OpenShift container is configured with a volume mount of type NFS.
OpenShift runs the container in a pod with a random user ID:
sh-4.2$ id
uid=1031290500(1031290500) gid=0(root) groups=0(root),1031290500
The mount point is /nfs/abc
sh-4.2$ ls -la /nfs/
ls: cannot access /nfs/abc: Permission denied
total 0
drwxr-xr-x. 1 root root 29 Nov 25 09:34 .
drwxr-xr-x. 1 root root 50 Nov 25 10:09 ..
d?????????? ? ? ? ? ? abc
In the Docker image I created a user "technical" with uid= gid=48760, as shown below.
FROM quay.repository
MAINTAINER developer
LABEL description="abc image" \
name="abc" \
version="1.0"
ARG APP_HOME=/opt/app
ARG PORT=8080
ENV JAR=app.jar \
SPRING_PROFILES_ACTIVE=default \
JAVA_OPTS=""
RUN mkdir $APP_HOME
ADD $JAR $APP_HOME/
WORKDIR $APP_HOME
EXPOSE $PORT
ENTRYPOINT java $JAVA_OPTS -Dspring.profiles.active=$SPRING_PROFILES_ACTIVE -jar $JAR
My deployment config file is shown below:
spec:
volumes:
- name: bad-import-file
persistentVolumeClaim:
claimName: nfs-test-pvc
containers:
- resources:
limits:
cpu: '1'
memory: 1Gi
requests:
cpu: 500m
memory: 512Mi
terminationMessagePath: /dev/termination-log
name: abc
env:
- name: SPRING_PROFILES_ACTIVE
valueFrom:
configMapKeyRef:
name: abc-configmap
key: spring.profiles.active
- name: DB_URL
valueFrom:
configMapKeyRef:
name: abc-configmap
key: db.url
- name: DB_USERNAME
valueFrom:
configMapKeyRef:
name: abc-configmap
key: db.username
- name: BAD_IMPORT_PATH
valueFrom:
configMapKeyRef:
name: abc-configmap
key: bad.import.path
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: abc-secret
key: db.password
ports:
- containerPort: 8080
protocol: TCP
imagePullPolicy: IfNotPresent
volumeMounts:
- name: bad-import-file
mountPath: /nfs/abc
dnsPolicy: ClusterFirst
securityContext:
runAsGroup: 44337
runAsNonRoot: true
supplementalGroups:
- 44337
The PersistentVolume is as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
name: abc-tuc-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: classic-nfs
mountOptions:
- hard
- nfsvers=3
nfs:
path: /tm03v06_vol3014
server: tm03v06cl02.jit.abc.com
readOnly: false
Now the OpenShift user has this id:
sh-4.2$ id
uid=1031290500(1031290500) gid=44337(technical) groups=44337(technical),1031290500
RECENT UPDATE
Just to be clear about the problem: below are two commands run in the same pod terminal.
sh-4.2$ cd /nfs/
sh-4.2$ ls -la (The first command I tried immediately after pod creation.)
total 8
drwxr-xr-x. 1 root root 29 Nov 29 08:20 .
drwxr-xr-x. 1 root root 50 Nov 30 08:19 ..
drwxrwx---. 14 technical technical 8192 Nov 28 19:06 abc
sh-4.2$ ls -la (a few seconds later on the same pod terminal)
ls: cannot access abc: Permission denied
total 0
drwxr-xr-x. 1 root root 29 Nov 29 08:20 .
drwxr-xr-x. 1 root root 50 Nov 30 08:19 ..
d?????????? ? ? ? ? ? abc
So the problem is that I see these question marks (???) on the mount point.
The mount itself works, but I cannot access the /nfs/abc directory, and the listing shows ????? for some reason.
UPDATE
sh-4.2$ ls -la /nfs/abc/
ls: cannot open directory /nfs/abc/: Stale file handle
sh-4.2$ ls -la /nfs/abc/ (after few seconds on the same pod terminal)
ls: cannot access /nfs/abc/: Permission denied
Could this STALE FILE HANDLE be the reason for this issue?
TL;DR
You can use the anyuid security context to run the pod to avoid having OpenShift assign an arbitrary UID, and set the permissions on the volume to the known UID of the user.
OpenShift overrides the user ID that the image itself specifies it should run as:
The user ID isn't actually entirely random, but is an assigned user ID which is unique to your project. In fact, your project is assigned a range of user IDs that applications can be run as. The set of user IDs will not overlap with other projects. You can see what range is assigned to a project by running oc describe on the project.
The purpose of assigning each project a distinct range of user IDs is so that in a multitenant environment, applications from different projects never run as the same user ID. When using persistent storage, any files created by applications will also have different ownership in the file system.
... this is a blessing and a curse when using shared persistent volume claims, for example (e.g. PVCs mounted ReadWriteMany by multiple pods that read/write data: files created by one pod won't be accessible to the other pod because of the mismatched file ownership and permissions).
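As mentioned in the quote above, the range assigned to your project can be inspected with oc describe; a quick check (my-project is a placeholder, the annotation names are the standard OpenShift ones):
oc describe project my-project | grep sa.scc
# look for openshift.io/sa.scc.uid-range and openshift.io/sa.scc.supplemental-groups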
One way to get around this issue is using the anyuid security context which "provides all features of the restricted SCC, but allows users to run with any UID and any GID".
When using the anyuid security context, we know the user and group IDs the pod(s) are going to run as, and we can set the permissions on the shared volume in advance. By default all pods run with the restricted security context and get an arbitrary UID from the range allocated for the namespace; when running with the anyuid security context, OpenShift does not assign an arbitrary UID, so the pod keeps the UID and GID defined in the image.
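Granting the SCC is a one-liner for a cluster admin; a minimal sketch, assuming the pod runs under the default service account in a placeholder namespace my-project:
oc adm policy add-scc-to-user anyuid -z default -n my-project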
This is just an example, but an image built with a non-root user with a fixed UID and GID (e.g. 1000:1000) would run in OpenShift as that user, files would be created with that user's ownership (e.g. 1000:1000), and permissions on the PVC can be set to the known UID and GID of the user the service runs as. For example, we can create a new PVC:
cat <<EOF |kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
namespace: k8s
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi
storageClassName: portworx-shared-sc
EOF
... then mount it in a pod:
kubectl run -i --rm --tty ansible --image=lazybit/ansible:v4.0.0 --restart=Never -n k8s --overrides='
{
"apiVersion": "v1",
"kind": "Pod",
"spec": {
"serviceAccountName": "default",
"containers": [
{
"name": "nginx",
"imagePullPolicy": "Always",
"image": "lazybit/ansible:v4.0.0",
"command": ["ash"],
"stdin": true,
"stdinOnce": true,
"tty": true,
"env": [
{
"name": "POD_NAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.name"
}
}
}
],
"volumeMounts": [
{
"mountPath": "/data",
"name": "data"
}
]
}
],
"volumes": [
{
"name": "data",
"persistentVolumeClaim": {
"claimName": "data"
}
}
]
}
}'
... and create files in the PVC as the USER set in the Dockerfile.
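For reference, a minimal Dockerfile sketch of an image with a fixed non-root user as described above (the 1000:1000 IDs and the alpine base are only illustrative, not taken from the answer):
FROM alpine:3.16
# create a fixed, non-root user and group (1000:1000 chosen arbitrarily)
RUN addgroup -g 1000 app && adduser -D -u 1000 -G app app
USER 1000:1000
WORKDIR /data
CMD ["sh"]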
Related
We are facing a strange issue with EKS Fargate pods. We want to push logs to CloudWatch with a sidecar fluent-bit container, and for that we are mounting the separately created /logs/boot and /logs/access folders on both containers with an emptyDir: {} volume. But somehow the access folder is getting deleted. When we tested this setup in local Docker it produced the desired results and everything worked fine, but not when deployed on EKS Fargate. Below are our manifest files.
Dockerfile
FROM anapsix/alpine-java:8u201b09_server-jre_nashorn
ARG LOG_DIR=/logs
# Install base packages
RUN apk update
RUN apk upgrade
# RUN apk add ca-certificates && update-ca-certificates
# Dynamically set the JAVA_HOME path
RUN export JAVA_HOME="$(dirname $(dirname $(readlink -f $(which java))))" && echo $JAVA_HOME
# Add Curl
RUN apk --no-cache add curl
RUN mkdir -p $LOG_DIR/boot $LOG_DIR/access
RUN chmod -R 0777 $LOG_DIR/*
# Add metadata to the image to describe which port the container is listening on at runtime.
# Change TimeZone
RUN apk add --update tzdata
ENV TZ="Asia/Kolkata"
# Clean APK cache
RUN rm -rf /var/cache/apk/*
# Setting JAVA HOME
ENV JAVA_HOME=/opt/jdk
# Copy all files and folders
COPY . .
RUN rm -rf /opt/jdk/jre/lib/security/cacerts
COPY cacerts /opt/jdk/jre/lib/security/cacerts
COPY standalone.xml /jboss-eap-6.4-integration/standalone/configuration/
# Set the working directory.
WORKDIR /jboss-eap-6.4-integration/bin
EXPOSE 8177
CMD ["./erctl"]
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: vinintegrator
namespace: eretail
labels:
app: vinintegrator
pod: fargate
spec:
selector:
matchLabels:
app: vinintegrator
pod: fargate
replicas: 2
template:
metadata:
labels:
app: vinintegrator
pod: fargate
spec:
securityContext:
fsGroup: 0
serviceAccount: eretail
containers:
- name: vinintegrator
imagePullPolicy: IfNotPresent
image: 653580443710.dkr.ecr.ap-southeast-1.amazonaws.com/vinintegrator-service:latest
resources:
limits:
memory: "7629Mi"
cpu: "1.5"
requests:
memory: "5435Mi"
cpu: "750m"
ports:
- containerPort: 8177
protocol: TCP
# securityContext:
# runAsUser: 506
# runAsGroup: 506
volumeMounts:
- mountPath: /jboss-eap-6.4-integration/bin
name: bin
- mountPath: /logs
name: logs
- name: fluent-bit
image: 657281243710.dkr.ecr.ap-southeast-1.amazonaws.com/fluent-bit:latest
imagePullPolicy: IfNotPresent
env:
- name: HOST_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
limits:
memory: 200Mi
requests:
cpu: 200m
memory: 100Mi
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
- name: logs
mountPath: /logs
readOnly: true
volumes:
- name: fluent-bit-config
configMap:
name: fluent-bit-config
- name: logs
emptyDir: {}
- name: bin
persistentVolumeClaim:
claimName: vinintegrator-pvc
Below are the /logs folder ownership and permissions. Please notice the 's' in drwxrwsrwx:
drwxrwsrwx 3 root root 4096 Oct 1 11:50 logs
Below is the content inside the logs folder. Notice that the access folder is missing (not created, or deleted):
/logs # ls -lrt
total 4
drwxr-sr-x 2 root root 4096 Oct 1 11:50 boot
/logs #
Below is the ConfigMap of Fluent Bit:
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-config
namespace: eretail
labels:
k8s-app: fluent-bit
data:
fluent-bit.conf: |
[SERVICE]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
#INCLUDE application-log.conf
application-log.conf: |
[INPUT]
Name tail
Path /logs/boot/*.log
Tag boot
[INPUT]
Name tail
Path /logs/access/*.log
Tag access
[OUTPUT]
Name cloudwatch_logs
Match *boot*
region ap-southeast-1
log_group_name eks-fluent-bit
log_stream_prefix boot-log-
auto_create_group On
[OUTPUT]
Name cloudwatch_logs
Match *access*
region ap-southeast-1
log_group_name eks-fluent-bit
log_stream_prefix access-log-
auto_create_group On
parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%LZ
Below is the error log of the Fluent Bit container:
AWS for Fluent Bit Container Image Version 2.14.0
Fluent Bit v1.7.4
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2021/10/01 06:20:33] [ info] [engine] started (pid=1)
[2021/10/01 06:20:33] [ info] [storage] version=1.1.1, initializing...
[2021/10/01 06:20:33] [ info] [storage] in-memory
[2021/10/01 06:20:33] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/10/01 06:20:33] [error] [input:tail:tail.1] read error, check permissions: /logs/access/*.log
[2021/10/01 06:20:33] [ warn] [input:tail:tail.1] error scanning path: /logs/access/*.log
[2021/10/01 06:20:38] [error] [net] connection #33 timeout after 5 seconds to: 169.254.169.254:80
[2021/10/01 06:20:38] [error] [net] socket #33 could not connect to 169.254.169.254:80
I suggest removing the following from your Dockerfile:
RUN mkdir -p $LOG_DIR/boot $LOG_DIR/access
RUN chmod -R 0777 $LOG_DIR/*
Use the following method to set up the log directories and permissions:
apiVersion: v1
kind: Pod # Deployment
metadata:
name: busy
labels:
app: busy
spec:
volumes:
- name: logs # Shared folder with ephemeral storage
emptyDir: {}
initContainers: # Setup your log directory here
- name: setup
image: busybox
command: ["bin/ash", "-c"]
args:
- >
mkdir -p /logs/boot /logs/access;
chmod -R 777 /logs
volumeMounts:
- name: logs
mountPath: /logs
containers:
- name: app # Run your application and logs to the directories
image: busybox
command: ["bin/ash","-c"]
args:
- >
while :; do echo "$(date): $(uname -r)" | tee -a /logs/boot/boot.log /logs/access/access.log; sleep 1; done
volumeMounts:
- name: logs
mountPath: /logs
- name: logger # Any logger that you like
image: busybox
command: ["bin/ash","-c"]
args: # tail the app logs, forward to CW etc...
- >
sleep 5;
tail -f /logs/boot/boot.log /logs/access/access.log
volumeMounts:
- name: logs
mountPath: /logs
The snippet runs on Fargate as well; run kubectl logs -f busy -c logger to see the tailing. In the real world, the "app" is your Java app and the "logger" is any log agent you like. Note that Fargate has native logging capability using AWS Fluent Bit, so you do not need to run AWS Fluent Bit as a sidecar.
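If you rely on Fargate's native logging instead of the sidecar, the mechanism is a Fluent Bit ConfigMap named aws-logging in the aws-observability namespace; a rough sketch (the log group name is a placeholder, verify the exact keys against the current EKS documentation):
apiVersion: v1
kind: Namespace
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region ap-southeast-1
        log_group_name eks-fargate-logs
        auto_create_group true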
I'm using the following tech:
helm
argocd
k8s
I created a secret:
╰ kubectl create secret generic my-secret --from-file=my-secret=/Users/superduper/project/src/main/resources/config-file.json --dry-run=client -o yaml
apiVersion: v1
data:
my-secret: <content>
kind: Secret
metadata:
creationTimestamp: null
name: my-secret
I then added the secret to my pod via a volume mount:
volumeMounts:
- mountPath: "/etc/config"
name: config
readOnly: true
volumes:
- name: config
secret:
secretName: my-secret
But the problem is that when I view the /etc/config directory, the contents show my-secret under a timestamped directory:
directory:/etc/config/..2021_07_10_20_14_55.980073047
file:/etc/config/..2021_07_10_20_14_55.980073047/my-secret
Is this normal? Is there any way I can get rid of that timestamp so I can programmatically grab the config secret?
This is the way Kubernetes mounts Secrets and ConfigMaps by default in order to propagate changes downward to those volume mounts if an upstream change occurs. If you would rather not use a symlink and want to forfeit that ability, use the subPath directive and your mount will appear as you wish.
volumeMounts:
- mountPath: /etc/config/my-secret
name: config
subPath: my-secret
readOnly: true
volumes:
- name: config
secret:
secretName: my-secret
$ k exec alpine -it -- /bin/ash
/ # ls -lah /etc/config/
total 12K
drwxr-xr-x 2 root root 4.0K Jul 10 22:58 .
drwxr-xr-x 1 root root 4.0K Jul 10 22:58 ..
-rw-r--r-- 1 root root 9 Jul 10 22:58 my-secret
/ # cat /etc/config/my-secret
hi there
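A side note not covered in the answer above: with the default (non-subPath) projection, each key is also exposed through a stable symlink at the root of the mount (pointing through the ..data directory), so a fixed path such as /etc/config/my-secret normally works for programmatic access even without subPath. You can check where it resolves from inside the container:
/ # readlink -f /etc/config/my-secret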
I am kind of new to AKS deployments with volume mounts. I want to create a pod in AKS from an image; that image needs a volume mount with a config.yaml file (which I already have and which needs to be passed to the image for it to run successfully).
Below is the docker command that works on my local machine.
docker run -v <Absolute_path_of_config.yaml>:/config.yaml image:tag
I want to achieve the same thing in AKS. When I tried to deploy it using an Azure File mount (with a PersistentVolumeClaim), the volume gets attached. The question now is how to pass the config.yaml file to that pod. I tried uploading the config.yaml file to the Azure File Share volume that is attached in the pod deployment, without any success.
Below is the pod deployment file that I used
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: mypod
image: image:tag
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 1Gi
volumeMounts:
- mountPath: "/config.yaml"
name: volume
volumes:
- name: volume
persistentVolumeClaim:
claimName: my-azurefile-storage
I need help with how to use that local config.yaml file in the AKS deployment so the image can run properly.
Thanks in advance.
Create a Kubernetes secret from the config.yaml file:
kubectl create secret generic config-yaml --from-file=config.yaml
Mount it as a volume in the pod.
apiVersion: v1
kind: Pod
metadata:
name: config
spec:
containers:
- name: config
image: alpine
command:
- cat
resources: {}
tty: true
volumeMounts:
- name: config
mountPath: /config.yaml
subPath: config.yaml
volumes:
- name: config
secret:
secretName: config-yaml
Exec into the pod and view the file:
kubectl exec -it config sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ls
bin dev home media opt root sbin sys usr
config.yaml etc lib mnt proc run srv tmp var
/ # cat config.yaml
---
apiUrl: "https://my.api.com/api/v1"
username: admin
password: password
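If config.yaml is not sensitive, a ConfigMap works the same way; a sketch under that assumption (the name app-config is arbitrary):
kubectl create configmap app-config --from-file=config.yaml
Then swap the volume source in the pod spec:
  volumes:
  - name: config
    configMap:
      name: app-config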
With Kubernetes, I'm trying to deploy the jenkins image and a persistent volume mapped to an NFS share (which is mounted on all my workers).
So, this is my share on my workers:
[root@pp-tmp-test24 /opt]# df -Th /opt/jenkins.persistent
Filesystem Type Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP nfs4 10G 9.5M 10G 1% /opt/jenkins.persistent
And my data on this share:
[root@pp-tmp-test24 /opt/jenkins.persistent]# ls -l
total 0
-rwxr-xr-x. 1 root root 0 Oct 2 11:53 newfile
[root@pp-tmp-test24 /opt/jenkins.persistent]# cat newfile
hello
Here are my YAML files to deploy it.
My PersistentVolume YAML:
apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-pv-nfs
labels:
type: type-nfs
spec:
storageClassName: class-nfs
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
hostPath:
path: /opt/jenkins.persistent
My PersistentVolumeClaim YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pvc-nfs
namespace: ns-jenkins
spec:
storageClassName: class-nfs
volumeMode: Filesystem
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
selector:
matchLabels:
type: type-nfs
And my Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: ns-jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- image: jenkins
#- image: httpd:latest
name: jenkins
ports:
- containerPort: 8080
protocol: TCP
name: jenkins-web
volumeMounts:
- name: jenkins-persistent-storage
mountPath: /var/foo
volumes:
- name: jenkins-persistent-storage
persistentVolumeClaim:
claimName: jenkins-pvc-nfs
After the kubectl create -f command, everything looks good:
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
jenkins-pv-nfs 10Gi RWX Recycle Bound ns-jenkins/jenkins-pvc-nfs class-nfs 37s
# kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ns-jenkins jenkins-pvc-nfs Bound jenkins-pv-nfs 10Gi RWX class-nfs 35s
# kubectl get pods -A |grep jenkins
ns-jenkins jenkins-5bdb8678c-x6vht 1/1 Running 0 14s
# kubectl describe pod jenkins-5bdb8678c-x6vht -n ns-jenkins
Name: jenkins-5bdb8678c-x6vht
Namespace: ns-jenkins
Priority: 0
Node: pp-tmp-test25.mydomain/172.31.68.225
Start Time: Wed, 02 Oct 2019 11:48:23 +0200
Labels: app=jenkins
pod-template-hash=5bdb8678c
Annotations: <none>
Status: Running
IP: 10.244.5.47
Controlled By: ReplicaSet/jenkins-5bdb8678c
Containers:
jenkins:
Container ID: docker://8a3e4871ed64b371818bac59e24d6912e5d2b13c8962c1639d36797fbce8082e
Image: jenkins
Image ID: docker-pullable://docker.io/jenkins#sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 02 Oct 2019 11:48:26 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/foo from jenkins-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dz6cd (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins-pvc-nfs
ReadOnly: false
default-token-dz6cd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dz6cd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 39s default-scheduler Successfully assigned ns-jenkins/jenkins-5bdb8678c-x6vht to pp-tmp-test25.mydomain
Normal Pulling 38s kubelet, pp-tmp-test25.mydomain Pulling image "jenkins"
Normal Pulled 36s kubelet, pp-tmp-test25.mydomain Successfully pulled image "jenkins"
Normal Created 36s kubelet, pp-tmp-test25.mydomain Created container jenkins
Normal Started 36s kubelet, pp-tmp-test25.mydomain Started container jenkins
On my worker, this is my container
# docker ps |grep jenkins
8a3e4871ed64 docker.io/jenkins@sha256:eeb4850eb65f2d92500e421b430ed1ec58a7ac909e91f518926e02473904f668 "/bin/tini -- /usr..." 2 minutes ago Up 2 minutes k8s_jenkins_jenkins-5bdb8678c-x6vht_ns-jenkins_64b66dae-a1da-4d90-83fd-ff433638dc9c_0
So I launch a shell in my container, and I can see my data in /var/foo:
# docker exec -t -i 8a3e4871ed64 /bin/bash
jenkins@jenkins-5bdb8678c-x6vht:/$ df -h /var/foo
Filesystem Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.5M 10G 1% /var/foo
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo -d
drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ ls -lZ /var/foo
-rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ cat newfile
hello
I'm trying to write data to my /var/foo/newfile but permission is denied:
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ echo "world" >> newfile
bash: newfile: Permission denied
Same thing in my /var/foo/ directory, I can't write data:
jenkins@jenkins-5bdb8678c-x6vht:/var/foo$ touch newfile2
touch: cannot touch 'newfile2': Permission denied
So I tried another image, httpd:latest, in my deployment YAML (keeping the same name in my YAML definition):
[...]
containers:
#- image: jenkins
- image: httpd:latest
[...]
# docker ps |grep jenkins
fa562400405d docker.io/httpd@sha256:39d7d9a3ab93c0ad68ee7ea237722ed1b0016ff6974d80581022a53ec1e58797 "httpd-foreground" 50 seconds ago Up 48 seconds k8s_jenkins_jenkins-7894877f96-6dj85_ns-jenkins_540b12bd-69df-44d8-b3df-20a0a96cc851_0
In my new container, this time I can read and write data:
root@jenkins-7894877f96-6dj85:/usr/local/apache2# df -h /var/foo
Filesystem Size Used Avail Use% Mounted on
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.6M 10G 1% /var/foo
root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ
total 0
-rwxr-xr-x. 1 root root system_u:object_r:nfs_t:s0 12 Oct 2 10:05 newfile
-rw-r--r--. 1 root root system_u:object_r:nfs_t:s0 0 Oct 2 10:06 newfile2
root@jenkins-7894877f96-6dj85:/var/foo# ls -lZ /var/foo -d
drwxr-xr-x. 2 root root system_u:object_r:nfs_t:s0 4096 Oct 2 10:06 /var/foo
root@jenkins-7894877f96-6dj85:/var/foo# ls -l
total 0
-rwxr-xr-x. 1 root root 6 Oct 2 09:55 newfile
root@jenkins-7894877f96-6dj85:/var/foo# echo "world" >> newfile
root@jenkins-7894877f96-6dj85:/var/foo# touch newfile2
root@jenkins-7894877f96-6dj85:/var/foo# ls -l
total 0
-rwxr-xr-x. 1 root root 12 Oct 2 10:05 newfile
-rw-r--r--. 1 root root 0 Oct 2 10:06 newfile2
What am I doing wrong? Is the problem due to the jenkins image not allowing RW access? I have the same problem with local storage (on my worker) used as a persistent volume.
One other thing, perhaps a silly question: with my jenkins image, I would like to mount the /var/jenkins_home directory on a persistent volume in order to keep Jenkins's configuration files. But if I try to mount /var/jenkins_home instead of /var/foo, the pod goes into CrashLoopBackOff (because there is already data stored in /var/jenkins_home).
Thank you all for your help!
I noticed you are trying to write as the jenkins user on jenkins-5bdb8678c-x6vht, which might not have write permissions in that root:root directory.
You might want to change that directory's permissions to match the jenkins user's privileges.
Try to verify that this is causing the issue by using sudo before writing to the file.
If sudo is not installed, exec in with the --user flag as the root user, so it is just like the other cases where writing worked:
docker exec -t -i -u root 8a3e4871ed64 /bin/bash
@Piotr Malec Thank you. Yes, I realized that: jenkins is the default user when I connect to my container:
docker exec -t -i 46d2497d440d /bin/bash
jenkins@jenkins-7bcdd5db57-8qgth:/$
So I changed the permissions on /opt/jenkins.persistent to 777 on my worker as a test, and now I have RW permission on this mount:
xxx.xxx.xxx.xxx:/VR_C_CS003_NFS_KUBERNETESPV_TMP_PP 10G 9.5M 10G 1% /var/foo
jenkins@jenkins-7bcdd5db57-8qgth:/$ cd /var
jenkins@jenkins-7bcdd5db57-8qgth:/$ ls -l
[...]
drwxrwxrwx. 2 root root 4096 Oct 4 13:41 foo
[...]
jenkins@jenkins-7bcdd5db57-8qgth:/$ cd /var/foo
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo $ touch newfile
jenkins@jenkins-7bcdd5db57-8qgth:/var/foo $ ls
newfile
So I added a jenkins user account on my worker and ran chown jenkins:jenkins on my /opt/jenkins.persistent directory. Now, inside my container I have RW permission:
jenkins@jenkins-7bcdd5db57-8qgth:/var$ ls -l
[...]
drwxr-xr-x. 2 jenkins jenkins 4096 Oct 4 13:53 foo
[...]
jenkins@jenkins-7bcdd5db57-8qgth:/var$ cd foo
jenkins@jenkins-7bcdd5db57-8qgth:/var/toto$ touch newfile2
jenkins@jenkins-7bcdd5db57-8qgth:/var/toto$ ls -l
-rw-r--r--. 1 jenkins jenkins 0 Oct 4 13:53 newfile2
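An alternative to creating matching users and running chown on the worker by hand (not part of this exchange, just a sketch) is to let an initContainer running as root fix the ownership of the volume before Jenkins starts. This assumes the official jenkins image's fixed UID/GID of 1000 and that root is allowed to change ownership on the underlying share:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-initchown-demo
  namespace: ns-jenkins
spec:
  initContainers:
  - name: fix-perms
    image: busybox
    # busybox runs as root by default, so it can chown the mounted volume
    command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
    volumeMounts:
    - name: jenkins-persistent-storage
      mountPath: /var/jenkins_home
  containers:
  - name: jenkins
    image: jenkins
    volumeMounts:
    - name: jenkins-persistent-storage
      mountPath: /var/jenkins_home
  volumes:
  - name: jenkins-persistent-storage
    persistentVolumeClaim:
      claimName: jenkins-pvc-nfs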
I'm using two VMs with Atomic Host (1 master, 1 node; CentOS image). I want to use NFS shares from another VM (Ubuntu Server 16.04) as persistent volumes for my pods. I can mount them manually, and in Kubernetes (version 1.5.2) the persistent volumes are successfully created and bound to my PVCs. They are also mounted in my pods. But when I try to write to, or even read from, the corresponding folder inside the pod, I get the error Permission denied. From my research I think the problem lies with the folder permissions/owner/group on my NFS host.
My exports file on the Ubuntu VM (/etc/exports) has 10 shares with the following pattern (The two IPs are the IPs of my Atomic Host Master and Node):
/home/user/pv/pv01 192.168.99.101(rw,insecure,async,no_subtree_check,no_root_squash) 192.168.99.102(rw,insecure,async,no_subtree_check,no_root_squash)
In the image for my pods I create a new user named guestbook, so that the container doesn't use a privileged user, as this is insecure. I read many posts like this one which state that you have to set the permissions to world-writable or use the same UID and GID for the shared folders. So in my Dockerfile I create the guestbook user with UID 1003 and a group with the same name and GID 1003:
RUN groupadd -r guestbook -g 1003 && useradd -u 1003 -r -g 1003 guestbook
On my NFS host I also have a user named guestbook with UID 1003, which is a member of the group nfs with GID 1003. The permissions of the shared folders (with ls -l) are as follows:
drwxrwxrwx 2 guestbook nfs 4096 Feb 19 11:23 pv01
(world writable, owner guestbook, group nfs). In my Pod I can see the permissions of the mounted folder /data (again with ls -l) as:
drwxrwxrwx. 2 guestbook guestbook 4096 Feb 9 13:37 data
The PersistentVolumes are created with a YAML file following this pattern:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv01
annotations:
pv.beta.kubernetes.io/gid: "1003"
spec:
capacity:
storage: 200Mi
accessModes:
- ReadWriteOnce
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
nfs:
path: /home/user/pv/pv01
server: 192.168.99.104
The Pod is created with this YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: get-started
spec:
replicas: 3
template:
metadata:
labels:
app: get-started
spec:
containers:
- name: get-started
image: docker.io/cebberg/get-started:custom5
ports:
- containerPort: 2525
env:
- name: GET_HOSTS_FROM
value: dns
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis
key: database-password
volumeMounts:
- name: log-storage
mountPath: "/data/"
imagePullPolicy: Always
securityContext:
privileged: false
volumes:
- name: log-storage
persistentVolumeClaim:
claimName: get-started
restartPolicy: Always
dnsPolicy: ClusterFirst
And the PVC with this YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: get-started
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
I tried different configurations for the owner/group of the folders. If I use my normal user (which is the same on all systems) as owner and group, I can mount manually and read and write in the folder. But I don't want to use my normal user; I want to use a different user (and especially not a privileged one).
What permissions do I have to set so that the user I create in my pod can write to the NFS volume?
I found the solution to my problem:
By accident I found log entries that appear every time I try to access the NFS volumes from my pods. They say that SELinux has blocked access to the folder because of a different security context.
To resolve the issue, I simply had to turn on the corresponding SELinux boolean virt_use_nfs with the command
setsebool virt_use_nfs on
This has to be done on all nodes to make it work correctly.
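To make the boolean persist across reboots, the standard -P flag can be added:
setsebool -P virt_use_nfs on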
EDIT:
I remembered that I now use sec=sys as a mount option in /etc/exports. This provides access control based on the UID and GID of the user creating a file (which seems to be the default). If you use sec=none you also have to turn on the SELinux boolean nfsd_anon_write, so that the user nfsnobody has permission to create files.