Cloud Run Secret Reference getting mounted as Directory instead of File - google-cloud-run

We need some help with Cloud Run and Secret Manager: we need to mount two secrets as volumes (file only). The following is the YAML from Cloud Run:
volumeMounts:
- name: secret-2f1d5ec9-d681-4b0f-8a77-204c5f853330
  readOnly: true
  mountPath: /root/key/mtls/client_auth.p12
- name: secret-29c1417a-d9fe-4c37-8cb0-562c97f3c827
  readOnly: true
  mountPath: /root/key/firebase/myapp-d2a0f-firebase-adminsdk-irfes-a699971a4d.json
volumes:
- name: secret-2f1d5ec9-d681-4b0f-8a77-204c5f853330
  secret:
    secretName: myapp_mtls_key
    items:
    - key: latest
      path: myapp_mtls_key
- name: secret-29c1417a-d9fe-4c37-8cb0-562c97f3c827
  secret:
    secretName: myapp_firebase_token
    items:
    - key: latest
      path: myapp_firebase_token
The mTLS secret (the .p12 file) is mounted properly as a file, but the Firebase secret (the .json file) is mounted as a directory instead:
java.io.FileNotFoundException: /root/key/firebase/myapp-d2a0f-firebase-adminsdk-irfes-a699971a4d.json (Is a directory)
    at java.base/java.io.FileInputStream.open0(Native Method)
    at java.base/java.io.FileInputStream.open(FileInputStream.java:216)
    at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
    at java.base/java.io.FileInputStream.<init>(FileInputStream.java:111)
    at com.myapp.gcp.GCPInit.init(GCPInit.java:39)
By Docker convention, a volume source that is not found on the host is mounted as a directory, but in this case we have no control over the host path or file availability, so could this be a bug?
When we test our deployment in a Docker container with volume mounts, everything works fine, so we are sure our application is not at fault.
Appreciate any guidance on this issue.
Thanks
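One thing worth checking before treating this as a bug (a hedged sketch, not a confirmed fix): with Kubernetes-style secret volumes, mountPath is treated as a directory and items[].path names the file created inside it, so with the YAML above the JSON may actually end up at .../myapp-d2a0f-firebase-adminsdk-irfes-a699971a4d.json/myapp_firebase_token. Pointing mountPath at the parent directory and using the expected file name as path would keep the application path unchanged, roughly:
volumeMounts:
- name: secret-29c1417a-d9fe-4c37-8cb0-562c97f3c827
  readOnly: true
  mountPath: /root/key/firebase          # directory, not the file itself
volumes:
- name: secret-29c1417a-d9fe-4c37-8cb0-562c97f3c827
  secret:
    secretName: myapp_firebase_token
    items:
    - key: latest
      path: myapp-d2a0f-firebase-adminsdk-irfes-a699971a4d.json   # file name created under mountPath
The same adjustment would apply to the mTLS mount; note that mounting a secret volume over an existing directory replaces its contents, so a dedicated directory is safest.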

Related

Rclone mount shared between containers in the same K8s pod

In my k8s pod, I want to give a container access to an S3 bucket, mounted with rclone.
Now, the container running rclone needs to run with --privileged, which is a problem for me, since my main-container will run user code which I have no control over and which can potentially be harmful to my Pod.
The solution I’m trying now is to have a sidecar-container just for the task of running rclone, mounting S3 in a /shared_storage folder, and sharing this folder with the main-container through a Volume shared-storage. This is a simplified pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-storage
    emptyDir: {}
  containers:
  - name: main-container
    image: busybox
    command: ["sh", "-c", "sleep 1h"]
    volumeMounts:
    - name: shared-storage
      mountPath: /shared_storage
      # mountPropagation: HostToContainer
  - name: sidecar-container
    image: mycustomsidecarimage
    securityContext:
      privileged: true
    command: ["/bin/bash"]
    args: ["-c", "python mount_source.py"]
    env:
    - name: credentials
      value: XXXXXXXXXXX
    volumeMounts:
    - name: shared-storage
      mountPath: /shared_storage
      mountPropagation: Bidirectional
The pod runs fine and from sidecar-container I can read, create and delete files from my S3 bucket.
But from main-container no files are listed inside of shared_storage. I can create files (if I set readOnly: false) but those do not appear in sidecar-container.
If I don't run the rclone mount to that folder, the containers are able to share files again. So that tells me it is something about the rclone process not letting main-container read from it.
In mount_source.py I am running rclone with --allow-other, and I have edited /etc/fuse.conf as suggested here.
Does anyone have an idea on how to solve this problem?
I've managed to make it work by using:
mountPropagation: HostToContainer on main-container
mountPropagation: Bidirectional on sidecar-container
I can control read/write permissions to specific mounts using readOnly: true/false on main-container. This is of course also possible to set within the rclone mount command.
Now the main-container does not need to run in privileged mode, and my users' code can access their S3 buckets through those mount points! A sketch of the working mounts follows below.
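For reference, applied to the pod.yml above, the working combination looks roughly like this (a sketch; only the volume-related fields are shown):
containers:
- name: main-container
  volumeMounts:
  - name: shared-storage
    mountPath: /shared_storage
    mountPropagation: HostToContainer
    readOnly: true                     # optional: restrict user code to read-only
- name: sidecar-container
  securityContext:
    privileged: true                   # only the rclone sidecar stays privileged
  volumeMounts:
  - name: shared-storage
    mountPath: /shared_storage
    mountPropagation: Bidirectional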
Interestingly, it doesn't seem to work if I set volumeMount:mountPath to be a sub-folder of the rclone-mounted path. So if I want to grant main-container different read/write permissions to different subpaths, I have to create a separate rclone mount for each sub-folder.
I'm not 100% sure if there's any extra security concerns with that approach though.

How does AKS handle the .env file in a container?

Assume there is a backend application with a private key stored in a .env file.
For the project file structure:
|-App files
|-Dockerfile
|-.env
If I run the Docker image locally, the application can be reached normally by using a valid public key during the API request. However, if I deploy the container into an AKS cluster using the same Docker image, the application fails.
I am wondering how a container in an AKS cluster handles the .env file. What should I do to solve this problem?
Moving this out of comments for better visibility.
First and most important: Docker is not the same as Kubernetes. What works in Docker won't necessarily work directly in Kubernetes. Docker is a container runtime, while Kubernetes is a container orchestration tool which sits on top of a container runtime (not always Docker now; containerd is used as well).
There are many resources on the internet which describe the key differences, for example this one from the Microsoft docs.
First, configmaps and secrets should be created:
Creating and managing configmaps and creating and managing secrets
There are different types of secrets which can be created.
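For the .env use case in the question, the private key could live in an Opaque secret along these lines (a sketch; the secret name and key are made up):
apiVersion: v1
kind: Secret
metadata:
  name: backend-env          # hypothetical name
type: Opaque
stringData:
  PRIVATE_KEY: "<value copied from the .env file>"
Alternatively, kubectl create secret generic backend-env --from-env-file=.env builds an equivalent secret directly from the existing file.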
Use configmaps/secrets as environment variables.
Referencing configmaps and secrets as environment variables looks like this (configmaps and secrets have the same structure):
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - ...
    env:
    - name: ADMIN_PASS
      valueFrom:
        secretKeyRef: # secretKeyRef is used here because this is sensitive data
          key: admin
          name: admin-password
    - name: MYSQL_DB_STRING
      valueFrom:
        configMapKeyRef: # this is not sensitive data, so a configmap can be used
          key: db_config
          name: connection_string
  ...
Use configmaps/secrets as volumes (they will be presented as files).
Below is an example of using secrets as files mounted in a specific directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
  - ...
    volumeMounts:
    - name: secrets-files
      mountPath: "/mnt/secret.file1" # the "secret.file1" file will be created in the "/mnt" directory
      subPath: secret.file1
  volumes:
  - name: secrets-files
    secret:
      secretName: my-secret # name of the Secret
There's a good article which explains and shows use cases of secrets as well as their limitations, e.g. size is limited to 1MiB.

Configure Mutagen to sync host files with a container, inside Kubernetes?

How should Mutagen be configured, to synchronise local-host source code files with a Docker volume, onto a Kubernetes cluster?
I used to mount my project directory onto a container, using hostPath:
kind: Deployment
spec:
  volumes:
  - name: "myapp-src"
    hostPath:
      path: "path/to/project/files/on/localhost"
      type: "Directory"
  ...
  containers:
  - name: myapp
    volumeMounts:
    - name: "myapp-src"
      mountPath: "/app/"
but this has permission and symlink problems that I need to solve using Mutagen.
At the moment, it works correctly when relying on docker-compose (run via mutagen project start -f path/to/mutagen.yml):
sync:
  defaults:
    symlink:
      mode: ignore
    permissions:
      defaultFileMode: 0664
      defaultDirectoryMode: 0775
  myapp-src:
    alpha: "../../"
    beta: "docker://myapp-mutagen/myapp-src"
    mode: "two-way-safe"
But it isn't clear to me how to configure the K8s Deployment in order to use Mutagen to keep the myapp-src volume in sync with localhost.

Kubernetes - cannot have Windows path mounted on Azure File Share (Linux mounting works properly)

First, I successfully mounted my Linux path on a Pod. I used an Azure file share, and the mounted folders appear on the File Share.
volumeMounts:
- name: ads-filesharevolume
  mountPath: /opt/front/arena/host
volumes:
- name: ads-filesharevolume
  azureFile:
    secretName: fa-fileshare-secret
    shareName: faselectaksshare
    readOnly: false
Now on the File Share I added one subfolder, "windows", for mounting. The logs say it is being mounted properly, but nothing actually gets mounted (folders and files do not appear on the mounted share, as they do in the Linux case):
args: ["-license_file", "C:/Host/dat/license.dat",
       "-key_file", "C:/Host/dat/license.key"]
volumeMounts:
- name: ads-win-filesharevolume
  mountPath: "C:\\host"
volumes:
- name: ads-win-filesharevolume
  azureFile:
    secretName: fa-fileshare-secret
    shareName: faselectaksshare\windows
    readOnly: false
For mountPath I tried with: C:\\host and C:/host and /c/host
Also for shareName I initially tried with faselectaksshare/windows but it threw an exception.
In the Pod describe output I can see that everything seems OK, but my expected folders from C:/host do not appear in my Azure File Share path in the windows subfolder. I receive similar output for all the other cases as well.
Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-nx49r (ro)
  C:/host from ads-win-filesharevolume (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ads-win-filesharevolume:
    Type:        AzureFile (an Azure File Service mount on the host and bind mount to the pod)
    SecretName:  fa-fileshare-secret
    ShareName:   faselectaksshare\windows
    ReadOnly:    false
Please help! Thanks
UPDATE:
I also tried this approach with subPath, and again no folders get mounted. I also do not get any error in the log or in the describe pod output:
volumeMounts:
- name: ads-filesharevolume
  mountPath: /host
  subPath: windows
volumes:
- name: ads-filesharevolume
  azureFile:
    secretName: fa-fileshare-secret
    shareName: faselectaksshare
    readOnly: false
Both Windows and Linux containers run at the same time:
Mount for Linux:
volumeMounts:
- name: azure
  mountPath: /mnt/azure
volumes:
- name: azure
  azureFile:
    shareName: aksshare/linux
    secretName: azure-secret
Mount for Windows:
volumeMounts:
- name: azure
  mountPath: "C:\\fileshare"
volumes:
- name: azure
  azureFile:
    shareName: aksshare\windows
    secretName: azure-secret
And the files that exist in each subfolder of the file share do not affect other ones.
According to the following thread, wsl2 doesn't yet support hostPath volumes.
Thread Source: https://github.com/docker/for-win/issues/5325
Look at this comment: https://github.com/docker/for-win/issues/5325#issuecomment-570683131
Try changing this line:
  # For "C://host"
  mountPath: /run/desktop/mnt/host/c/host
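Applied to the Windows mount from the question, that change would look roughly like this (a sketch keeping the original volume definition and only changing the mount path):
volumeMounts:
- name: ads-win-filesharevolume
  mountPath: /run/desktop/mnt/host/c/host   # per the comment above, standing in for "C://host"
volumes:
- name: ads-win-filesharevolume
  azureFile:
    secretName: fa-fileshare-secret
    shareName: faselectaksshare\windows
    readOnly: false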
The kubelet is supposed to mount the Azure File Share into the container.
It uses https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/mount-utils/mount_windows.go and https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/azure_file/azure_file.go
It uses SMB Mapping and then mklink to mount the Azure File Share into the container.
Please restart the kubelet with the --v 4 flag on the Windows node where the Pod is running and the Azure File Share is supposed to be mounted, so we can see debug messages in the kubelet log when it tries to mount the Azure File Share into the container. Then please provide the messages from the kubelet log. You should see the messages below from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/mount-utils/mount_windows.go
klog.V(3).Infof("mounting source (%q), target (%q), with options (%q)", source, target, sanitizedOptionsForLogging)
klog.V(4).Infof("mount options(%q) source:%q, target:%q, fstype:%q, begin to mount",
sanitizedOptionsForLogging, source, target, fstype)
klog.Warningf("SMB Mapping(%s) returned with error(%v), output(%s)", source, err, string(output))
klog.V(2).Infof("SMB Mapping(%s) already exists while it's not valid, return error: %v, now begin to remove and remount", source, err)
klog.V(2).Infof("SMB Mapping(%s) already exists and is still valid, skip error(%v)", source, err)
klog.Errorf("mklink failed: %v, source(%q) target(%q) output: %q", err, mklinkSource, target, string(output))
klog.V(2).Infof("mklink source(%q) on target(%q) successfully, output: %q", mklinkSource, target, string(output))

Fluentd not able to access the logs present under /var/lib/docker/containers due to permission issue

I am trying to read the container logs through Fluentd and pass them to Elasticsearch. I have mounted the directories from the host onto the Fluentd container, including all symlinks and actual files.
But when I look at the Fluentd container logs, they say the logs under /var/log/pods/ are unreadable. I then manually navigated to the path inside the Fluentd container where the logs are present, but unfortunately got a permission denied error.
When I went to /var/lib/docker/containers, the permissions were 0700 and the owner was root. I am even trying to run my Fluentd container by setting
- name: FLUENT_UID
  value: "0"
But it is still not able to read the logs.
volumes:
- name: varlog
  hostPath:
    path: /var/log/
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
.....
volumeMounts:
- name: varlog
  mountPath: /var/log/
- name: varlibdockercontainers
  mountPath: /var/lib/docker/containers
You should take a look at security contexts. Among other things, they allow you to specify the user the container runs as with runAsUser, that user's primary group with runAsGroup, and the group ownership applied to mounted volumes with fsGroup.
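A minimal sketch of what that could look like for the Fluentd pod (values are illustrative; runAsUser: 0 gives the root access the FLUENT_UID setting was aiming for):
spec:
  securityContext:
    runAsUser: 0          # run the container processes as root so the 0700 root-owned path is readable
    runAsGroup: 0         # primary group for those processes
    fsGroup: 2000         # group ownership applied to volumes that support it (hostPath is not changed)
  containers:
  - name: fluentd
    ...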
