I am somewhat new to containers and I am looking for clarity on how best to establish a user with non-root privileges. This container is for an Azure DevOps agent. By default, it appears the container will run as root, since nothing more specific is specified in terms of users or UID.
Doing some research on this, I came across this VS Code tutorial that shows how to set up a user with non-root privileges. Interestingly enough, but it is unclear how that UID (in this case 1000) should relate to the deployment yaml (below) in terms of the UID value.
If I specify a UID of 1000 in the Dockerfile, does that mean that I must also specify 1000 in the deployment yaml as I have below, or are these UIDs completely separate and have nothing to do with each other?
Thanks for your input. Learning as I go along...
apiVersion: apps/v1
kind: Deployment
metadata:
name: az-devops-locks
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: az-devops-locks
app.kubernetes.io/instance: locks
template:
metadata:
labels:
app.kubernetes.io/name: az-devops-locks
app.kubernetes.io/instance: locks
aadpodidbinding: azdomilocks
spec:
securityContext:
runAsUser: 1000
containers:
- name: "az-devops-locks"
image: "xxxxx.azurecr.io/ado-agent:latest"
securityContext:
runAsUser: 1000
allowPrivilegeEscalation: false
env:
- name: AZP_URL
value: https://dev.azure.com/yyy
- name: AZP_TOKEN
valueFrom:
secretKeyRef:
name: ADOxxxx
key: AZP_TOKEN
- name: AZP_POOL
value: Pool01
volumeMounts:
- mountPath: /var/run/docker.sock
name: docker-volume
volumes:
- name: docker-volume
hostPath:
path: /var/run/docker.sock
By default, it appears the container will run as root, since nothing more specific is specified in terms of users or UID.
Actually no: it depends on your base image.
If the base image's own Dockerfile specified a USER (say, USER 1000)
and your Dockerfile does not specify a USER
then your own built image will inherit the USER of the base image.
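If the base image does run as root and you want an explicit non-root user, a minimal Dockerfile sketch could look like the following (the base image, user name, and home directory are illustrative assumptions; only the UID matters for matching the deployment's runAsUser):

# Sketch only: create a dedicated non-root user with a fixed UID.
FROM ubuntu:20.04
RUN groupadd --gid 1000 agent \
 && useradd --uid 1000 --gid 1000 --create-home agent
# Switch to the non-root user for everything that follows.
USER agent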
If I specify a UID of 1000 in the Dockerfile, does that mean that I must also specify 1000 in the deployment yaml
You do not have to, but that is a way to enforce that the container will not use any other user to write files in your mounted folder.
Because its process will run with the user ID you specify, it would not have the right to do any chown (as opposed to the default owner of a container process, root, which has the right to do... anything).
See "Set the security context for a Pod":
In the configuration file, the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000.
The runAsGroup field specifies the primary group ID of 3000 for all processes within any containers of the Pod.
If this field is omitted, the primary group ID of the containers will be root(0).
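Applied to your manifest, a minimal sketch of a tightened pod-level securityContext might look like this (the runAsGroup and fsGroup values of 1000 and the runAsNonRoot field are illustrative additions, not requirements):

spec:
  securityContext:
    runAsUser: 1000    # matches the UID created in the Dockerfile
    runAsGroup: 1000   # illustrative: primary group for all processes
    fsGroup: 1000      # illustrative: group ownership applied to mounted volumes
  containers:
  - name: az-devops-locks
    securityContext:
      runAsNonRoot: true              # refuse to start if the image would run as UID 0
      allowPrivilegeEscalation: false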
According to this source, I can store data in /my/data/folder by running the following:
docker run -d -p 80:80 -v {/my/data/folder}:/data bluespice/bluespice-free
I have created the following deployment, but I am not sure how to use a persistent volume.
apiVersion: apps/v1
kind: Deployment
metadata:
name: bluespice
namespace: default
labels:
app: bluespice
spec:
replicas: 1
selector:
matchLabels:
app: bluespice
template:
metadata:
labels:
app: bluespice
spec:
containers:
- name: bluespice
image: bluespice/bluespice-free
ports:
- containerPort: 80
env:
- name: bs_url
value: "https://bluespice.mycompany.local"
My persistent volume claim name is bluespice-pvc.
Also, I have deployed the pod without a persistent volume. Can I attach a persistent volume on the fly to keep the data?
If you want to mount a local directory, you don't have to deal with a PVC, since you can't force a specific host path in a PersistentVolumeClaim. For testing locally, you can use hostPath as explained in the documentation:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some uses for a hostPath are:
running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
running cAdvisor in a container; use a hostPath of /sys
allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
In addition to the required path property, you can optionally specify a type for a hostPath volume.
hostPath configuration example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: bluespice
namespace: default
labels:
app: bluespice
spec:
replicas: 1
selector:
matchLabels:
app: bluespice
template:
metadata:
labels:
app: bluespice
spec:
containers:
- image: bluespice/bluespice-free
name: bluespice
volumeMounts:
- mountPath: /data
name: bluespice-volume
volumes:
- name: bluespice-volume
hostPath:
# directory location on host
path: /my/data/folder
# this field is optional
type: Directory
However, if you want to move to a production cluster, you should consider a more reliable option, since allowing HostPaths is a security risk and is not portable:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective.
For more information about PersistentVolumes, you can check the official Kubernetes documentation:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
Therefore, I would recommend using a cloud solution like GCP or AWS, or at least an NFS share mounted directly from Kubernetes. Also check this topic on Stack Overflow.
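If you do go with PersistentVolumes, a minimal sketch of wiring the existing bluespice-pvc claim into the Deployment could look like this (the volume name bluespice-data is a hypothetical label; the claim and a suitable PV or StorageClass are assumed to exist):

spec:
  template:
    spec:
      containers:
      - name: bluespice
        image: bluespice/bluespice-free
        volumeMounts:
        - name: bluespice-data
          mountPath: /data            # same path the docker run example uses
      volumes:
      - name: bluespice-data
        persistentVolumeClaim:
          claimName: bluespice-pvc    # the claim named in the question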
About your last question: it is not possible to attach a Persistent Volume to a running pod on the fly.
I'm creating an application that uses helm (v3.3.0) + k3s. A program in a container uses different configuration files. As of now there are just a few config files (that I added manually before building the image), but I'd like the ability to add them dynamically while the container is running and not lose them once the container/pod is dead. In Docker I'd do that by exposing a folder like this:
docker run [image] -v /host/path:/container/path
Is there an equivalent for helm?
If not, how would you suggest solving this issue without giving up helm/k3s?
In Kubernetes (Helm is just a tool on top of it), you need to do two things to mount a host path inside a container:
spec:
volumes:
# 1. Declare a 'hostPath' volume under pod's 'volumes' key:
- name: name-me
hostPath:
path: /path/on/host
containers:
- name: foo
image: bar
# 2. Mount the declared volume inside container using volume name
volumeMounts:
- name: name-me
mountPath: /path/in/container
There are lots of other volume types and examples in the Kubernetes documentation.
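Since the question mentions Helm specifically, the same two keys can be templated in the chart so that the host path comes from values.yaml instead of being hard-coded (a sketch; hostConfigPath is a hypothetical value name):

# values.yaml (hypothetical)
hostConfigPath: /path/on/host

# templates/deployment.yaml (fragment)
      volumes:
      - name: config-from-host
        hostPath:
          path: {{ .Values.hostConfigPath }}
      containers:
      - name: foo
        image: bar
        volumeMounts:
        - name: config-from-host
          mountPath: /path/in/container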
Kubernetes has a dedicated construct for holding configuration files, ConfigMaps. Helm in turn has support for Accessing Files Inside Templates which can help you copy them into ConfigMap objects. A minimal setup here would look like:
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
config.ini: |
{{ .Files.Get "config.ini" | indent 4 }}
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata: { ... }
spec:
template:
spec:
volumes:
- name: config-data
configMap:
name: my-config # matches ConfigMap metadata: { name: }
containers:
- volumeMounts:
- name: config-data # matches volume name: in this file
mountPath: /container/path
You can use Helm's templating constructs in various ways here: to dynamically construct the contents of the ConfigMap, to set an environment variable saying which file to use, and so on.
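As one illustration, here is a hedged sketch that picks up every file from a configs/ directory in the chart (the directory name is an assumption) instead of naming each file individually:

# templates/configmap.yaml -- variant using Files.Glob
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  # AsConfig renders each matched file as "<filename>: <contents>"
{{ (.Files.Glob "configs/*").AsConfig | indent 2 }}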
Do not use hostPath volumes here. Since Kubernetes is designed as a clustered environment, you do not have much control over which node a given pod will run on; you would have to copy these config files to every node in the cluster and try to update them all when a file changed. That's a huge maintenance problem, especially if you don't have direct filesystem access to the nodes.
This is with OpenShift Container Platform 4.3.
Consider this Dockerfile.
FROM eclipse-mosquitto
# Create folders
USER root
RUN mkdir -p /mosquitto/data /mosquitto/log
# mosquitto configuration
USER mosquitto
# This is crucial to me
COPY --chown=mosquitto:mosquitto ri45.conf /mosquitto/config/mosquitto.conf
EXPOSE 1883
And, this is my Deployment YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
name: mosquitto-broker
spec:
selector:
matchLabels:
app: mosquitto-broker
template:
metadata:
labels:
app: mosquitto-broker
spec:
containers:
- name: mosquitto-broker
image: org/repo/eclipse-mosquitto:1.0.1
imagePullPolicy: Always
resources:
limits:
memory: "128Mi"
cpu: "500m"
volumeMounts:
- name: mosquitto-data
mountPath: /mosquitto/data
- name: mosquitto-log
mountPath: /mosquitto/log
ports:
- name: mqtt
containerPort: 1883
volumes:
- name: mosquitto-log
persistentVolumeClaim:
claimName: mosquitto-log
- name: mosquitto-data
persistentVolumeClaim:
claimName: mosquitto-data
When I do an oc create -f with the above YAML, I get this error: 2020-06-02T07:59:59: Error: Unable to open log file /mosquitto/log/mosquitto.log for writing. Maybe this is a permissions error; I can't tell. Anyway, going by the eclipse/mosquitto Dockerfile, I see that mosquitto is a user with a UID and GID of 1883. So, I added the securityContext as described here.
securityContext:
fsGroup: 1883
When I do an oc create -f with this modification, I get this error: securityContext.securityContext.runAsUser: Invalid value: 1883: must be in the ranges: [1002120000, 1002129999].
This approach of adding an initContainer to set permissions on the volume does not work for me because I would have to be root to do that.
So, how do I enable the Eclipse mosquitto container to write to /mosquitto/log successfully?
There are multiple things to address here.
First off, you should make sure that you really want to bake a configuration file into your container image. Configuration files are typically added via ConfigMaps or Secrets, as the configuration of cloud-native applications should come from the environment (OpenShift in your case).
Secondly, it seems that you are logging to a PersistentVolume, which is also a terrible practice, as the best practice would be to log to stdout. Of course, having application data (transaction logs) on a persistent volume makes sense.
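To make the first point concrete, here is a hedged sketch of shipping the mosquitto configuration as a ConfigMap and logging to stdout (the ConfigMap name and the config contents are assumptions based on common mosquitto settings, not your ri45.conf):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mosquitto-config            # hypothetical name
data:
  mosquitto.conf: |
    listener 1883
    # log to stdout instead of /mosquitto/log
    log_dest stdout
    persistence true
    persistence_location /mosquitto/data/
# In the Deployment, mount it over the default config file:
#   volumeMounts:
#   - name: config
#     mountPath: /mosquitto/config/mosquitto.conf
#     subPath: mosquitto.conf
#   volumes:
#   - name: config
#     configMap:
#       name: mosquitto-config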
As for your original question (which should no longer be relevant given the two points above), the issue can be approached using SecurityContextConstraints (SCCs): Managing Security Context Constraints
So, to resolve your issue, you should use or create an SCC with runAsUser set correctly.
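An untested sketch of that, assuming cluster-admin rights and hypothetical names (mosquitto-sa, my-project); the built-in nonroot SCC allows a fixed non-root UID such as the image's 1883:

oc create serviceaccount mosquitto-sa -n my-project
oc adm policy add-scc-to-user nonroot -z mosquitto-sa -n my-project
# then reference the service account in the Deployment:
#   spec.template.spec.serviceAccountName: mosquitto-sa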
I'm using two VMs with Atomic Host (1 master, 1 node; CentOS image). I want to use NFS shares from another VM (Ubuntu Server 16.04) as persistent volumes for my pods. I can mount them manually, and in Kubernetes (version 1.5.2) the persistent volumes are successfully created and bound to my PVCs. They are also mounted in my pods. But when I try to write to or even read from the corresponding folder inside the pod, I get the error Permission denied. From my research, I think the problem lies with the permissions/owner/group of the folders on my NFS host.
My exports file on the Ubuntu VM (/etc/exports) has 10 shares with the following pattern (The two IPs are the IPs of my Atomic Host Master and Node):
/home/user/pv/pv01 192.168.99.101(rw,insecure,async,no_subtree_check,no_root_squash) 192.168.99.102(rw,insecure,async,no_subtree_check,no_root_squash)
In the image for my pods, I create a new user named guestbook, so that the container doesn't use a privileged user, as that is insecure. I read many posts like this one, which state that you have to set the permissions to world-writable or use the same UID and GID for the shared folders. So in my Dockerfile I create the guestbook user with UID 1003 and a group with the same name and GID 1003:
RUN groupadd -r guestbook -g 1003 && useradd -u 1003 -r -g 1003 guestbook
On my NFS host I also have a user named guestbook with UID 1003, a member of the group nfs with GID 1003. The permissions of the shared folders (with ls -l) are as follows:
drwxrwxrwx 2 guestbook nfs 4096 Feb 19 11:23 pv01
(world writable, owner guestbook, group nfs). In my Pod I can see the permissions of the mounted folder /data (again with ls -l) as:
drwxrwxrwx. 2 guestbook guestbook 4096 Feb 9 13:37 data
The PersistentVolumes are created from a YAML file following this pattern:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv01
annotations:
pv.beta.kubernetes.io/gid: "1003"
spec:
capacity:
storage: 200Mi
accessModes:
- ReadWriteOnce
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
nfs:
path: /home/user/pv/pv01
server: 192.168.99.104
The Pod is created with this YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: get-started
spec:
replicas: 3
template:
metadata:
labels:
app: get-started
spec:
containers:
- name: get-started
image: docker.io/cebberg/get-started:custom5
ports:
- containerPort: 2525
env:
- name: GET_HOSTS_FROM
value: dns
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis
key: database-password
volumeMounts:
- name: log-storage
mountPath: "/data/"
imagePullPolicy: Always
securityContext:
privileged: false
volumes:
- name: log-storage
persistentVolumeClaim:
claimName: get-started
restartPolicy: Always
dnsPolicy: ClusterFirst
And the PVC with this YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: get-started
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
I tried different configurations for the owner/group of the folders. If I use my normal user (which is the same on all systems) as owner and group, I can mount manually and read and write in the folder. But I don't want to use my normal user; I want to use another user (and especially not a privileged one).
What permissions do I have to set, so that the user I create in my Pod can write to the NFS volume?
I found the solution to my problem:
By accident, I found log entries that appear every time I try to access the NFS volumes from my pods. They say that SELinux has blocked access to the folder because of a different security context.
To resolve the issue, I simply had to turn on the corresponding SELinux boolean virt_use_nfs with the command
setsebool virt_use_nfs on
This has to be done on all nodes to make it work correctly.
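For reference, a quick sketch of checking the boolean and making the change persist across reboots (run on each node):

getsebool virt_use_nfs           # check the current value
setsebool -P virt_use_nfs on     # -P makes the change survive a reboot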
EDIT:
I remembered that I now use sec=sys as a mount option in /etc/exports. This provides access control based on the UID and GID of the user creating a file (which seems to be the default). If you use sec=none, you also have to turn on the SELinux boolean nfsd_anon_write, so that the user nfsnobody has permission to create files.
How can I inject code/files directly into a container in Kubernetes on Google Cloud Engine, similar to the way that you can mount host files / directories with Docker, e.g.
docker run -d --name nginx -p 443:443 -v "/nginx.ssl.conf:/etc/nginx/conf.d/default.conf"
Thanks
It is possible to use ConfigMaps to achieve that goal:
The following example mounts a mariadb configuration file into a mariadb Pod:
ConfigMap
apiVersion: v1
data:
charset.cnf: |
[client]
# Default is Latin1, if you need UTF-8 set this (also in server section)
default-character-set = utf8
[mysqld]
#
# * Character sets
#
# Default is Latin1, if you need UTF-8 set all this (also in client section)
#
character-set-server = utf8
collation-server = utf8_unicode_ci
kind: ConfigMap
metadata:
name: mariadb-configmap
MariaDB deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mariadb
labels:
app: mariadb
spec:
replicas: 1
template:
metadata:
labels:
app: mariadb
version: 10.1.16
spec:
containers:
- name: mariadb
image: mariadb:10.1.16
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mariadb
key: rootpassword
volumeMounts:
- name: mariadb-data
mountPath: /var/lib/mysql
- name: mariadb-config-file
mountPath: /etc/mysql/conf.d
volumes:
- name: mariadb-data
hostPath:
path: /var/lib/data/mariadb
- name: mariadb-config-file
configMap:
name: mariadb-configmap
It is also possible to use the subPath feature, available in Kubernetes since version 1.3, as stated here.
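For the single-file case in the question (/etc/nginx/conf.d/default.conf), a hedged sketch using subPath so that only that one file is overridden (the ConfigMap name nginx-config and its key are assumptions):

containers:
- name: nginx
  image: nginx
  volumeMounts:
  - name: nginx-config-file
    mountPath: /etc/nginx/conf.d/default.conf
    subPath: default.conf         # mount just this key, not the whole directory
volumes:
- name: nginx-config-file
  configMap:
    name: nginx-config            # must contain a key named default.conf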
I'm not sure you can do exactly that. Kubernetes does things quite differently from Docker, and isn't really ideal for interacting with the 'host' the way you are probably used to with Docker.
A few alternative possibilities come to mind. First, and probably least ideal but closest to what you are asking, would be to add the file after the container is running, either by adding commands or args to the pod spec, or by using kubectl exec and echoing the contents into the file. Second would be to create a volume where that file already exists, e.g., create a GCE or EBS disk, add that file, and then mount the file location (read-only) in the container's spec. Third would be to create a new Docker image where that file or other code already exists.
For the first option, kubectl exec is really for one-off jobs; it isn't very scalable or repeatable. Any creation/fetching at runtime adds that much overhead to the container's start time, so I normally go with the third option, building a new Docker image whenever the file or code changes. The more you change it, the more you'll probably want a CI system (like Drone) to help automate the process.
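For reference, the third option can be as small as this sketch (the base image and file name follow the nginx example from the question):

# Bake the configuration file into a new image.
FROM nginx
COPY nginx.ssl.conf /etc/nginx/conf.d/default.conf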
Add a comment if I should expand any of these options with more details.
Kubernetes allows you to mount volumes into your pod. One such volume type is hostPath (link), which allows you to mount a directory from the host into the pod.
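A minimal sketch of that for the nginx example above (the file must already exist on whichever node the pod is scheduled to; the type: File field is optional and only validates that assumption):

spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d/default.conf
  volumes:
  - name: nginx-conf
    hostPath:
      path: /nginx.ssl.conf
      type: File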