Fluentd unable to access logs under /var/lib/docker/containers due to a permission issue - docker

I am trying to read the container logs through Fluentd and pass them to Elasticsearch. I have mounted the log directories from the host onto the Fluentd container, which include all the symlinks and the actual files.
But when I look at the Fluentd container logs, they say that the logs under /var/log/pods/ are unreadable. I then manually navigated to the path inside the Fluentd container where the logs are present, but I got a permission denied error.
When I went up to /var/lib/docker/containers, the permissions were 0700 and the owner was root. I am even trying to run my Fluentd container by setting
- name: FLUENT_UID
  value: "0"
But still it is not able to read.
volumes:
  - name: varlog
    hostPath:
      path: /var/log/
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
.....
volumeMounts:
  - name: varlog
    mountPath: /var/log/
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers

You should take a look at security contexts. Among other things, they allow you to specify the user the container runs as with runAsUser, that user's primary group with runAsGroup, and the group ownership applied to mounted volumes with fsGroup.
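As a rough illustration (not a drop-in fix), a pod-level securityContext for the Fluentd DaemonSet could look like the sketch below. The UID/GID values and the image name are assumptions; 0 (root) matches the 0700, root-owned /var/lib/docker/containers directory described in the question, and note that fsGroup only re-owns volume types that support it, which hostPath does not.
spec:
  securityContext:
    runAsUser: 0     # user the container process runs as (root here, matching the host dir owner)
    runAsGroup: 0    # primary group of that user
    fsGroup: 0       # group ownership for supported volume types (hostPath is not re-owned)
  containers:
    - name: fluentd
      image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch  # illustrative image
      volumeMounts:
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true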

Related

Rclone mount shared between containers in the same K8s pod

In my k8s pod, I want to give a container access to an S3 bucket mounted with rclone.
Now, the container running rclone needs to run with --privileged, which is a problem for me, since my main-container will run user code that I have no control over and that could potentially be harmful to my Pod.
The solution I'm trying now is to have a sidecar-container just for the task of running rclone, mounting S3 in a /shared_storage folder, and sharing this folder with the main-container through a shared-storage Volume. This is a simplified pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
    - name: shared-storage
      emptyDir: {}
  containers:
    - name: main-container
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          # mountPropagation: HostToContainer
    - name: sidecar-container
      image: mycustomsidecarimage
      securityContext:
        privileged: true
      command: ["/bin/bash"]
      args: ["-c", "python mount_source.py"]
      env:
        - name: credentials
          value: XXXXXXXXXXX
      volumeMounts:
        - name: shared-storage
          mountPath: /shared_storage
          mountPropagation: Bidirectional
The pod runs fine, and from sidecar-container I can read, create and delete files in my S3 bucket.
But from main-container, no files are listed inside /shared_storage. I can create files there (if I set readOnly: false), but those do not appear in sidecar-container.
If I don't run the rclone mount to that folder, the containers are able to share files again. So that tells me it is something about the rclone mount that is not letting main-container read from it.
In mount_source.py I am running rclone with --allow-other, and I have edited /etc/fuse.conf as suggested here.
Does anyone have an idea on how to solve this problem?
I've managed to make it work by using:
mountPropagation: HostToContainer on main-container
mountPropagation: Bidirectional on sidecar-container
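For reference, a minimal sketch of just the volumeMounts that change, assuming the same pod spec as in the question:
containers:
  - name: main-container
    volumeMounts:
      - name: shared-storage
        mountPath: /shared_storage
        mountPropagation: HostToContainer   # receive the mount created by the sidecar
        readOnly: true                      # optional: keep user code read-only
  - name: sidecar-container
    securityContext:
      privileged: true                      # the rclone/FUSE sidecar still needs this
    volumeMounts:
      - name: shared-storage
        mountPath: /shared_storage
        mountPropagation: Bidirectional     # propagate the rclone mount back out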
I can control read/write permissions to specific mounts using readOnly: true/false on main-container. This is of course also possible to set within the rclone mount command.
Now the main-container does not need to run in privileged mode, and my users' code can access their S3 buckets through those mount points!
Interestingly, it doesn't seem to work if I set the volumeMount's mountPath to be a sub-folder of the rclone-mounted path. So if I want to grant main-container different read/write permissions to different subpaths, I have to create a separate rclone mount for each sub-folder.
I'm not 100% sure whether there are any extra security concerns with that approach, though.

Configure Mutagen to sync host files with a container, inside Kubernetes?

How should Mutagen be configured to synchronise local source code files with a Docker volume on a Kubernetes cluster?
I used to mount my project directory into a container using hostPath:
kind: Deployment
spec:
  volumes:
    - name: "myapp-src"
      hostPath:
        path: "path/to/project/files/on/localhost"
        type: "Directory"
  ...
  containers:
    - name: myapp
      volumeMounts:
        - name: "myapp-src"
          mountPath: "/app/"
but this has permission and symlink problems that I need to solve using Mutagen.
At the moment, it works correctly when relying on docker-compose (run via mutagen project start -f path/to/mutagen.yml):
sync:
  defaults:
    symlink:
      mode: ignore
    permissions:
      defaultFileMode: 0664
      defaultDirectoryMode: 0775
  myapp-src:
    alpha: "../../"
    beta: "docker://myapp-mutagen/myapp-src"
    mode: "two-way-safe"
But it isn't clear to me how to configure the K8s Deployment in order to use Mutagen for keeping the myapp-src volume in sync with localhost.

Kubernetes - cannot get a Windows path mounted on Azure File Share (Linux mounting works properly)

First, I successfully mounted my Linux path in a Pod.
I used an Azure file share, and the mounted folders appear on the File Share.
volumeMounts:
  - name: ads-filesharevolume
    mountPath: /opt/front/arena/host
volumes:
  - name: ads-filesharevolume
    azureFile:
      secretName: fa-fileshare-secret
      shareName: faselectaksshare
      readOnly: false
Now I added one subfolder "windows" on the File Share for mounting. The logs say it is being mounted properly, but nothing actually gets mounted (folders and files do not appear on the mounted share as they do for Linux).
args: [ "-license_file", "C:/Host/dat/license.dat",
        "-key_file", "C:/Host/dat/license.key"]
volumeMounts:
  - name: ads-win-filesharevolume
    mountPath: "C:\\host"
volumes:
  - name: ads-win-filesharevolume
    azureFile:
      secretName: fa-fileshare-secret
      shareName: faselectaksshare\windows
      readOnly: false
For mountPath I tried with: C:\\host and C:/host and /c/host
Also for shareName I initially tried with faselectaksshare/windows but it threw an exception.
In the Pod describe output everything seems OK, but the expected folders from C:/host do not appear in the windows subfolder of my Azure File Share. I get similar output for all the other cases as well.
Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-nx49r (ro)
  C:/host from ads-win-filesharevolume (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ads-win-filesharevolume:
    Type:        AzureFile (an Azure File Service mount on the host and bind mount to the pod)
    SecretName:  fa-fileshare-secret
    ShareName:   faselectaksshare\windows
    ReadOnly:    false
Please help! Thanks
UPDATE:
I also tried this approach with subPath, and again no folders get mounted. I also do not get any error in the logs or in the describe pod output:
volumeMounts:
  - name: ads-filesharevolume
    mountPath: /host
    subPath: windows
volumes:
  - name: ads-filesharevolume
    azureFile:
      secretName: fa-fileshare-secret
      shareName: faselectaksshare
      readOnly: false
Both Windows and Linux containers run at the same time:
Mount for Linux:
volumeMounts:
  - name: azure
    mountPath: /mnt/azure
volumes:
  - name: azure
    azureFile:
      shareName: aksshare/linux
      secretName: azure-secret
Mount for Windows:
volumeMounts:
  - name: azure
    mountPath: "C:\\fileshare"
volumes:
  - name: azure
    azureFile:
      shareName: aksshare\windows
      secretName: azure-secret
And the files that exist in each subfolder of the file share do not affect the other ones.
According to the following thread, WSL2 doesn't yet support hostPath volumes.
Thread Source: https://github.com/docker/for-win/issues/5325
Look at this comment: https://github.com/docker/for-win/issues/5325#issuecomment-570683131
Try changing this line:
# For "C://host"
mountPath: /run/desktop/mnt/host/c/host
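Placed in the context of the question's volumeMounts, that suggestion would look roughly like this (a sketch only; the path is the WSL2 mapping of the Windows C: drive from the linked comment):
volumeMounts:
  - name: ads-win-filesharevolume
    # For "C://host" on Docker Desktop / WSL2:
    mountPath: /run/desktop/mnt/host/c/host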
The kubelet is supposed to mount the Azure File Share into the container.
It uses https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/mount-utils/mount_windows.go and https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/azure_file/azure_file.go
It uses an SMB mapping and then mklink to mount the Azure File Share into the container.
Please start the kubelet on the Windows node where the Pod is running and the Azure File Share is supposed to be mounted with the --v 4 flag, so we get to see debug messages in the kubelet log when it tries to mount the Azure File Share into the container. Then please provide the messages from the kubelet log. You should see the messages below, from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/mount-utils/mount_windows.go:
klog.V(3).Infof("mounting source (%q), target (%q), with options (%q)", source, target, sanitizedOptionsForLogging)
klog.V(4).Infof("mount options(%q) source:%q, target:%q, fstype:%q, begin to mount",
sanitizedOptionsForLogging, source, target, fstype)
klog.Warningf("SMB Mapping(%s) returned with error(%v), output(%s)", source, err, string(output))
klog.V(2).Infof("SMB Mapping(%s) already exists while it's not valid, return error: %v, now begin to remove and remount", source, err)
klog.V(2).Infof("SMB Mapping(%s) already exists and is still valid, skip error(%v)", source, err)
klog.Errorf("mklink failed: %v, source(%q) target(%q) output: %q", err, mklinkSource, target, string(output))
klog.V(2).Infof("mklink source(%q) on target(%q) successfully, output: %q", mklinkSource, target, string(output))

How do you get a UID:GID in the [1002120000, 1002129999] range to make it run in OpenShift?

This is with OpenShift Container Platform 4.3.
Consider this Dockerfile.
FROM eclipse-mosquitto
# Create folders
USER root
RUN mkdir -p /mosquitto/data /mosquitto/log
# mosquitto configuration
USER mosquitto
# This is crucial to me
COPY --chown=mosquitto:mosquitto ri45.conf /mosquitto/config/mosquitto.conf
EXPOSE 1883
And, this is my Deployment YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto-broker
spec:
  selector:
    matchLabels:
      app: mosquitto-broker
  template:
    metadata:
      labels:
        app: mosquitto-broker
    spec:
      containers:
        - name: mosquitto-broker
          image: org/repo/eclipse-mosquitto:1.0.1
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          volumeMounts:
            - name: mosquitto-data
              mountPath: /mosquitto/data
            - name: mosquitto-log
              mountPath: /mosquitto/log
          ports:
            - name: mqtt
              containerPort: 1883
      volumes:
        - name: mosquitto-log
          persistentVolumeClaim:
            claimName: mosquitto-log
        - name: mosquitto-data
          persistentVolumeClaim:
            claimName: mosquitto-data
When I do an oc create -f with the above YAML, I get this error: 2020-06-02T07:59:59: Error: Unable to open log file /mosquitto/log/mosquitto.log for writing. Maybe this is a permissions error; I can't tell. Anyway, going by the eclipse-mosquitto Dockerfile, I see that mosquitto is a user with a UID and GID of 1883. So I added the securityContext as described here:
securityContext:
  fsGroup: 1883
When I do an oc create -f with this modification, I get this error: securityContext.securityContext.runAsUser: Invalid value: 1883: must be in the ranges: [1002120000, 1002129999].
The approach of adding an initContainer to set permissions on the volume does not work for me because I would have to be root to do that.
So how do I enable the Eclipse Mosquitto container to write to /mosquitto/log successfully?
There are multiple things to address here.
First off, you should make sure that you really want to bake a configuration file into your container image. Configuration files are typically added via ConfigMaps or Secrets, as configuration in cloud-native applications should come from the environment (OpenShift in your case).
Secondly, it seems that you are logging to a PersistentVolume, which is also a bad practice; the best practice is to log to stdout. Of course, having application data (transaction logs) on a persistent volume makes sense.
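To illustrate those two points, here is a hedged sketch of a ConfigMap that carries the Mosquitto configuration and logs to stdout; the name mosquitto-config and the exact directives are assumptions, not taken from the question:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-config   # illustrative name
data:
  mosquitto.conf: |
    # log to stdout instead of /mosquitto/log
    listener 1883
    persistence true
    persistence_location /mosquitto/data/
    log_dest stdout
Mounted at /mosquitto/config via a volume, this would replace the COPY of ri45.conf in the Dockerfile.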
As for your original question (which should no longer be relevant given the two points above), the issue can be approached using SecurityContextConstraints (SCCs): Managing Security Context Constraints
So, to resolve your issue, you should use or create an SCC with runAsUser set correctly.
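A minimal sketch of such an SCC, assuming you want the pod to keep running as the mosquitto UID/GID 1883; the SCC name, the volume list, and the service account in users are placeholders, and the SCC still has to be granted to the deployment's service account (for example with oc adm policy add-scc-to-user):
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: mosquitto-scc                 # placeholder name
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAs                     # pin a specific UID instead of the project range
  uid: 1883
fsGroup:
  type: MustRunAs
  ranges:
    - min: 1883
      max: 1883
supplementalGroups:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
volumes:
  - configMap
  - emptyDir
  - persistentVolumeClaim
  - secret
users:
  - system:serviceaccount:my-project:default   # placeholder namespace/service account
Alternatively, granting the built-in anyuid SCC to that service account also allows the container to run as UID 1883 without a custom SCC.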

Configure NFS share for Kubernetes 1.5 on Atomic Host: Permission Denied

I'm using two VMs with Atomic Host (1 master, 1 node; CentOS image). I want to use NFS shares from another VM (Ubuntu Server 16.04) as persistent volumes for my pods. I can mount them manually, and in Kubernetes (version 1.5.2) the persistent volumes are successfully created and bound to my PVCs. They are also mounted in my pods. But when I try to write to, or even read from, the corresponding folder inside the pod, I get a Permission denied error. From my research I think the problem lies with the permissions/owner/group of the folders on my NFS host.
My exports file on the Ubuntu VM (/etc/exports) has 10 shares with the following pattern (the two IPs are those of my Atomic Host master and node):
/home/user/pv/pv01 192.168.99.101(rw,insecure,async,no_subtree_check,no_root_squash) 192.168.99.102(rw,insecure,async,no_subtree_check,no_root_squash)
In the image for my pods I create a new user named guestbook, so that the container doesn't run as a privileged user, as this is insecure. I read many posts like this one that state you have to set the permissions to world-writable or use the same UID and GID for the shared folders. So in my Dockerfile I create the guestbook user with UID 1003 and a group with the same name and GID 1003:
RUN groupadd -r guestbook -g 1003 && useradd -u 1003 -r -g 1003 guestbook
On my NFS host I also have a user named guestbook with UID 1003, a member of the group nfs with GID 1003. The permissions of the shared folders (with ls -l) are as follows:
drwxrwxrwx 2 guestbook nfs 4096 Feb 19 11:23 pv01
(world writable, owner guestbook, group nfs). In my Pod I can see the permissions of the mounted folder /data (again with ls -l) as:
drwxrwxrwx. 2 guestbook guestbook 4096 Feb 9 13:37 data
The PersistentVolumes are created with a YAML file following this pattern:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  annotations:
    pv.beta.kubernetes.io/gid: "1003"
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /home/user/pv/pv01
    server: 192.168.99.104
The Pod is created with this YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: get-started
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: get-started
    spec:
      containers:
        - name: get-started
          image: docker.io/cebberg/get-started:custom5
          ports:
            - containerPort: 2525
          env:
            - name: GET_HOSTS_FROM
              value: dns
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: database-password
          volumeMounts:
            - name: log-storage
              mountPath: "/data/"
          imagePullPolicy: Always
          securityContext:
            privileged: false
      volumes:
        - name: log-storage
          persistentVolumeClaim:
            claimName: get-started
      restartPolicy: Always
      dnsPolicy: ClusterFirst
And the PVC with this YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: get-started
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
I tried different configurations for the owner/group of the folders. If I use my normal user (which is the same on all systems) as owner and group, I can mount manually and read and write in the folder. But I don't want to use my normal user; I want to use another user (and especially not a privileged one).
What permissions do I have to set so that the user I create in my Pod can write to the NFS volume?
I found the solution to my problem:
By accident I found log entries that appear every time I try to access the NFS volumes from my pods. They say that SELinux has blocked access to the folder because of a differing security context.
To resolve the issue, I simply had to turn on the corresponding SELinux boolean virt_use_nfs with the command
setsebool virt_use_nfs on
This has to be done on all nodes to make it work correctly.
EDIT:
I remembered that I now use sec=sys as the mount option in /etc/exports. This provides access control based on the UID and GID of the user creating a file (which seems to be the default). If you use sec=none, you also have to turn on the SELinux boolean nfsd_anon_write so that the user nfsnobody has permission to create files.
